modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
hsa5/ppo-LunarLander-v2
hsa5
2025-05-30T08:21:02Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-30T08:20:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 246.60 +/- 23.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
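The usage section of this card is left as a TODO. A minimal runnable sketch of one way to fill it in follows — the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption based on the usual huggingface_sb3 naming convention, not taken from the repo's file list:
```python
# Hedged sketch: filename and env id are assumptions; verify against the repo.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="hsa5/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed conventional filename
)
model = PPO.load(checkpoint)

# On gymnasium >= 1.0 the env id is "LunarLander-v3"; v2 matches this card.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```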
fatmhd1995/jd_inclusive_detection_ft_phi3_new
fatmhd1995
2025-05-30T08:20:19Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-29T15:16:36Z
--- base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** fatmhd1995 - **License:** apache-2.0 - **Finetuned from model:** unsloth/phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
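The card gives no usage code. A minimal inference sketch follows — it assumes merged weights were pushed to the repo (the row's tags list safetensors); if only LoRA adapters were saved, they would need to be loaded with peft instead, and the prompt shown is a hypothetical guess at the task suggested by the repo name:
```python
# Hedged sketch: assumes plain-transformers-loadable merged weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fatmhd1995/jd_inclusive_detection_ft_phi3_new"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical prompt: the exact expected input format is not documented.
prompt = "Review this job description for inclusive language: We need a strong young man for this role."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```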
TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B-Q4_K_M-GGUF
TheMindExpansionNetwork
2025-05-30T08:20:09Z
1
0
transformers
[ "transformers", "gguf", "llama-factory", "llama-cpp", "gguf-my-repo", "base_model:TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B", "base_model:quantized:TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B", "endpoints_compatible", "region:us" ]
null
2025-05-30T08:19:21Z
--- library_name: transformers tags: - llama-factory - llama-cpp - gguf-my-repo base_model: TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B --- # TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B-Q4_K_M-GGUF This model was converted to GGUF format from [`TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B`](https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file m1ndb0t-dr34m3r-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file m1ndb0t-dr34m3r-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file m1ndb0t-dr34m3r-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file m1ndb0t-dr34m3r-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048 ```
QuantTrio/DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16
QuantTrio
2025-05-30T08:18:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "Instruct", "Chat", "Reason Model", "Quantization", "conversational", "en", "zh", "arxiv:2501.12948", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
text-generation
2025-05-30T07:50:32Z
--- frameworks: - Pytorch license: mit library_name: transformers pipeline_tag: text-generation base_model: - deepseek-ai/DeepSeek-R1-0528-Qwen3-8B language: - en - zh tags: - Instruct - Chat - Reason Model - Quantization tools: - vllm base_model_relation: quantized tasks: - text-generation --- ### <span style="color:red">Important: a friendly reminder — when running inference with this model, please follow the official guidance below:</span> #### For thinking mode (enable_thinking=True, the default) > Use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0 (the defaults in generation_config.json). Do not use <b>greedy decoding</b>, as it may degrade performance and cause infinite loops. See the best-practices section for more detailed guidance. #### For non-thinking mode (enable_thinking=False) > Temperature=0.7, TopP=0.8, TopK=20, and MinP=0 are recommended. See the best-practices section for more detailed guidance. #### 📖 For a study of quantization loss, see the WeChat public-account "觉察流" article👇</br> "[A 'slimming plan' for reasoning models: the gains and losses of quantization](https://mp.weixin.qq.com/s/NMGq4UUkfo8GMix5LHnWCg)" # DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16 Quantized with high-precision calibration from the original model [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://www.modelscope.cn/models/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) --- #### _Scan here 👇🏻 to reach the author_ <img src="https://www.modelscope.cn/models/okwinds/GPT-2/resolve/master/qrcode_for_jcl_258.jpg" /> --- ## Download SDK download ```bash # Install ModelScope pip install modelscope ``` ```python # Download the model via the SDK from modelscope import snapshot_download model_dir = snapshot_download('okwinds/DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16') ``` Git download ``` # Download the model via Git git clone https://www.modelscope.cn/okwinds/DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16.git ``` ## Model overview DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16 is an INT8-quantized and calibrated model based on DeepSeek-R1-0528-Qwen3-8B. - **Model name:** DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16 - **Model architecture:** Qwen3 - **Weight quantization:** INT8 The model was produced by quantizing the weights of [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://www.modelscope.cn/models/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) to the INT8 data type. Quantization reduces each parameter from 16 bits to 8 bits, cutting the model's disk footprint, as well as the GPU memory needed to load it for inference, to roughly half that of the original model. Only the Linear-layer weights inside the transformer are quantized; all other layers keep the original data type, and computation runs in mixed precision to minimize the accuracy loss introduced by quantization. Quantization was performed with [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) using symmetric group-wise quantization, and data calibration was applied during quantization to improve generation accuracy (nearly lossless relative to BF16). > <span style="color: red;">Note: this model requires an Nvidia GPU with compute capability > 8.0 (Ampere, Ada Lovelace, or Hopper architecture) to support INT8 **mixed-precision** computation.</span> ## Deployment and inference #### vLLM>=0.8.4 (transformers>=4.51.0) is recommended OpenAI-API-compatible mode ```bash vllm serve "/home/gavin/llm/DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16" --host 0.0.0.0 --port 8000 --gpu-memory-utilization 0.9 --served-model-name "DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16" ``` (A minimal client example is sketched at the end of this card.) --- # Appendix: Introduction to DeepSeek-R1-0528-Qwen3-8B # DeepSeek-R1-0528 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: 
middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro. <p align="center"> <img width="80%" src="figures/benchmark.png"> </p> Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question. Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and better experience for vibe coding. ## 2. Evaluation Results ### DeepSeek-R1-0528 For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |----------|----------------------------------|-----------------|---| | General | | | MMLU-Redux (EM) | 92.9 | 93.4 | | MMLU-Pro (EM) | 84.0 | 85.0 | | GPQA-Diamond (Pass@1) | 71.5 | 81.0 | | SimpleQA (Correct) | 30.1 | 27.8 | | FRAMES (Acc.) | 82.5 | 83.0 | | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 | Code | | | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 | | Codeforces-Div1 (Rating) | 1530 | 1930 | | SWE Verified (Resolved) | 49.2 | 57.6 | | Aider-Polyglot (Acc.) 
| 53.3 | 71.6 | Math | | | AIME 2024 (Pass@1) | 79.8 | 91.4 | | AIME 2025 (Pass@1) | 70.0 | 87.5 | | HMMT 2025 (Pass@1) | 41.7 | 79.4 | | | CNMO 2024 (Pass@1) | 78.8 | 86.9 | Tools | | | BFCL_v3_MultiTurn (Acc) | - | 37.0 | | | Tau-Bench (Pass@1) | - | 53.5(Airline)/63.9(Retail) </div> Note: We use Agentless framework to evaluate model performance on SWE-Verified. We only evaluate text-only prompts in HLE testsets. GPT-4.1 is employed to act user role in Tau-bench evaluation. ### DeepSeek-R1-0528-Qwen3-8B Meanwhile, we distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models. | | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) | |--------------------------------|---------|---------|-------------|--------------|---------------------------| | Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 | | Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - | | Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - | | Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - | | Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 | | o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 | | DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 | ## 3. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 4. How to Run Locally Please visit [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) repository for more information about running DeepSeek-R1-0528 locally. Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes: 1. System prompt is supported now. 2. It is not required to add "\<think\>\n" at the beginning of the output to force the model into thinking pattern. The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528. This model can be run in the same manner as Qwen3-8B, but it is essential to ensure that all configuration files are sourced from our repository rather than the original Qwen3 project. ### System Prompt In the official DeepSeek web/app, we use the same system prompt with a specific date. ``` 该助手为DeepSeek-R1,由深度求索公司创造。 今天是{current date}。 ``` For example, ``` 该助手为DeepSeek-R1,由深度求索公司创造。 今天是2025年5月28日,星期一。 ``` ### Temperature In our web and application environments, the temperature parameter $T_{model}$ is set to 0.6. ### Prompts for File Uploading and Web Search For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments. ``` file_template = \ """[file name]: {file_name} [file content begin] {file_content} [file content end] {question}""" ``` For Web Search, {search_results}, {cur_date}, and {question} are arguments. 
For Chinese query, we use the prompt: ``` search_answer_zh_template = \ '''# 以下内容是基于用户发送的消息的搜索结果: {search_results} 在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。 在回答时,请注意以下几点: - 今天是{cur_date}。 - 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。 - 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。 - 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。 - 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。 - 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。 - 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。 - 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。 - 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。 # 用户消息为: {question}''' ``` For English query, we use the prompt: ``` search_answer_en_template = \ '''# The following contents are the search results related to the user's message: {search_results} In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer. When responding, please keep the following points in mind: - Today is {cur_date}. - Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question. - For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary. - For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough. - If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content. - For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content. - Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability. - Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage. - Unless the user requests otherwise, your response should be in the same language as the user's question. # The user's message is: {question}''' ```
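As a usage note for the deployment section of the quantized card above: once the `vllm serve` command is running, the endpoint can be queried with any OpenAI-compatible client. A minimal sketch follows — the served model name matches `--served-model-name` and the sampling values follow the card's own thinking-mode guidance, while the local address is an assumption:
```python
# Hedged sketch: assumes the vllm serve command above is running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="DeepSeek-R1-0528-Qwen3-8B-Int8-W8A16",  # matches --served-model-name
    messages=[{"role": "user", "content": "What does W8A16 quantization mean?"}],
    temperature=0.6,  # thinking-mode settings recommended by the card
    top_p=0.95,
)
print(response.choices[0].message.content)
```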
RoyRoyRpy/test_fine-tuned-visionllama_10_epo1
RoyRoyRpy
2025-05-30T08:17:36Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-11B-Vision-Instruct", "base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct", "license:llama3.2", "region:us" ]
null
2025-05-30T08:16:59Z
--- library_name: peft license: llama3.2 base_model: meta-llama/Llama-3.2-11B-Vision-Instruct tags: - trl - sft - generated_from_trainer model-index: - name: test_fine-tuned-visionllama_10_epo1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_fine-tuned-visionllama_10_epo1 This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.13.0 - Transformers 4.45.1 - Pytorch 2.4.0+cu121 - Datasets 3.0.1 - Tokenizers 0.20.3
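The card does not show how to use the adapter. A minimal loading sketch follows — it assumes the repo holds a PEFT (LoRA) adapter for the gated base model, as the tags suggest, and uses class names available in the Transformers 4.45 line this card reports:
```python
# Hedged sketch: attach the adapter to the gated base model (access required).
import torch
from peft import PeftModel
from transformers import AutoProcessor, MllamaForConditionalGeneration

base_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
adapter_id = "RoyRoyRpy/test_fine-tuned-visionllama_10_epo1"

processor = AutoProcessor.from_pretrained(base_id)
base_model = MllamaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # load the LoRA weights
```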
sourav-83/ppo-LunarLander-v2
sourav-83
2025-05-30T08:17:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-30T08:16:58Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.94 +/- 23.09 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF
RiggityWrckd
2025-05-30T08:16:20Z
0
0
transformers
[ "transformers", "gguf", "multimodal", "llama-cpp", "gguf-my-repo", "any-to-any", "en", "base_model:Qwen/Qwen2.5-Omni-7B", "base_model:quantized:Qwen/Qwen2.5-Omni-7B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
any-to-any
2025-05-30T08:15:32Z
--- license: other license_name: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Omni-7B/blob/main/LICENSE language: - en tags: - multimodal - llama-cpp - gguf-my-repo library_name: transformers pipeline_tag: any-to-any base_model: Qwen/Qwen2.5-Omni-7B --- # RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-Omni-7B`](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo RiggityWrckd/Qwen2.5-Omni-7B-Q8_0-GGUF --hf-file qwen2.5-omni-7b-q8_0.gguf -c 2048 ```
kristaller486/sokol-1.5b-02-lora
kristaller486
2025-05-30T08:15:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T08:15:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tuanku007/flan-t5-small-samsum
tuanku007
2025-05-30T08:15:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-30T05:25:50Z
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-small-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-small-samsum This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6228 - Rouge1: 39.5688 - Rouge2: 17.0668 - Rougel: 33.0094 - Rougelsum: 36.2832 - Gen Len: 16.1728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.906 | 1.0 | 185 | 1.6361 | 39.3548 | 17.0322 | 33.2853 | 36.1753 | 16.2593 | | 1.8529 | 2.0 | 370 | 1.6235 | 39.3493 | 16.8943 | 32.7354 | 36.1759 | 15.9630 | | 1.7972 | 3.0 | 555 | 1.6228 | 39.5688 | 17.0668 | 33.0094 | 36.2832 | 16.1728 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0 - Datasets 3.6.0 - Tokenizers 0.21.1
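The card lists ROUGE scores but no usage code. A minimal sketch follows, matching the row's `text2text-generation` pipeline tag — the `summarize:` prefix is the usual T5 convention and an assumption here, since the card does not document the exact input format:
```python
# Hedged sketch: dialogue summarization in the SAMSum style the name suggests.
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="tuanku007/flan-t5-small-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow? "
    "Ben: Yes, 12:30 at the usual place? "
    "Anna: Perfect, see you there!"
)
result = summarizer("summarize: " + dialogue, max_length=60)
print(result[0]["generated_text"])
```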
FBK-MT/fama-medium-asr
FBK-MT
2025-05-30T08:15:16Z
7
0
null
[ "safetensors", "conformer_encoder_decoder", "speech", "speech recognition", "ASR", "custom_code", "en", "it", "dataset:FBK-MT/mosel", "dataset:facebook/covost2", "dataset:openslr/librispeech_asr", "dataset:facebook/voxpopuli", "arxiv:2505.22759", "license:cc-by-4.0", "region:us" ]
null
2025-02-03T10:18:51Z
--- license: cc-by-4.0 language: - en - it datasets: - FBK-MT/mosel - facebook/covost2 - openslr/librispeech_asr - facebook/voxpopuli metrics: - wer tags: - speech - speech recognition - ASR --- # FAMA-medium-asr <div> <img src="FAMA.png" width="100%" alt="FAMA" /> </div> ## Table of Contents 1. [Overview](#overview) 2. [Usage](#Usage) 3. [Results](#Results) 4. [License](#license) 5. [Citation](#citation) ## Overview FAMA is the first family of large-scale open-science SFMs for English and Italian, trained on [over 150k hours of exclusively open-source (OS)-compliant speech data](https://huggingface.co/datasets/FBK-MT/fama-data). FAMA models achieve [remarkable results](#results), with ASR and ST improvements on average across languages compared to OWSM, and are competitive in terms of ASR performance with the Whisper model family while being up to 8 times faster. All the artifacts used to realize the FAMA models, including the codebase, datasets, and the models themselves, are [released under OS-compliant licenses](#license), promoting a more responsible creation of models in our community. FAMA is available in 2 sizes, with 2 ASR-only variants: - [FAMA-small](https://huggingface.co/FBK-MT/fama-small) - 475 million parameters - [FAMA-medium](https://huggingface.co/FBK-MT/fama-medium) - 878 million parameters - [FAMA-small-asr](https://huggingface.co/FBK-MT/fama-small-asr) - 475 million parameters - [FAMA-medium-asr](https://huggingface.co/FBK-MT/fama-medium-asr) - 878 million parameters For more information about FAMA, please check our [blog post](https://huggingface.co/blog/FAMA/release) and the [arXiv](https://arxiv.org/abs/2505.22759) preprint. ## Usage FAMA models are supported in Hugging Face 🤗 Transformers. To run the model, first install the Transformers and Datasets libraries. ```sh pip install transformers==4.48.1 datasets ``` To perform a single inference on a sample audio file using the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class, run: ```python import torch from transformers import AutoProcessor, pipeline from datasets import load_dataset model_id = "FBK-MT/fama-medium-asr" processor = AutoProcessor.from_pretrained(model_id) device = "cuda:0" if torch.cuda.is_available() else "cpu" tgt_lang = "en" # Force the model to start with the language tag lang_tag = "<lang:{}>".format(tgt_lang) lang_tag_id = processor.tokenizer.convert_tokens_to_ids(lang_tag) generate_kwargs = {"num_beams": 5, "no_repeat_ngram_size": 5, "forced_bos_token_id": lang_tag_id} pipe = pipeline( "automatic-speech-recognition", model=model_id, trust_remote_code=True, torch_dtype=torch.float32, device=device, return_timestamps=False, generate_kwargs=generate_kwargs ) dataset = load_dataset("distil-whisper/librispeech_asr_dummy", "clean", split="validation") sample = dataset[0]["audio"] result = pipe(sample) print(result["text"]) ``` Where `tgt_lang` is the target language (either `en` or `it`). The source language does not need to be specified. To run the inference on a local audio file `audio.wav`, call the pipeline with: ```python result = pipe("audio.wav") ``` To perform batch inference with size `batch_size`, run: ```python result = pipe(["audio_1.wav", "audio_2.wav"], batch_size=2) ``` For inference, we suggest converting the audio files to WAV format with a 16 kHz sampling rate and a single channel. ## Results We evaluate FAMA-ASR on ASR using popular open-source datasets such as CommonVoice, Multilingual LibriSpeech (MLS), and VoxPopuli. 
The metric used is WER (↓). We also benchmark FAMA in terms of computational time and maximum batch size supported on HuggingFace against Whisper and SeamlessM4T models. The metric used is the inverse real time factor (xRTF). **Key highlights:** - FAMA achieves up to 4.2 WER improvement on average across languages compared to OWSM v3.1 - FAMA is up to 8 times faster than Whisper large-v3 while achieving comparable performance ### Automatic Speech Recognition (ASR) | ***Model/Dataset WER (↓)*** | **CommonVoice**-*en* | **CommonVoice**-*it* | **MLS**-*en* | **MLS**-*it* | **VoxPopuli**-*en* | **VoxPopuli**-*it* | **AVG**-*en* | **AVG**-*it* | |-----------------------------------------|---------|---------|---------|---------|---------|----------|---------|----------| | Whisper *medium* | 14.5 | 10.4 | 14.2 | 15.9 | 8.1 | 26.8 | 12.3 | 17.7 | | Whisper *large-v3* | 11.2 | 6.5 | **5.0** | 8.8 | 7.1 | 18.8 | 7.8 | 11.4 | | OWSM v3.1 *medium* | 11.9 | 12.5 | 6.6 | 19.3 | 8.4 | 24.0 | 9.0 | 18.6 | | SeamlessM4T *medium* | 10.7 | 7.8 | 8.8 | 11.3 | 10.2 | 18.2 | 9.9 | 12.4 | | SeamlessM4T *v2-large* | **7.7** | **5.0** | 6.4 | **8.5** | **6.9** | 16.6 | **7.0** | **10.0** | | FAMA-ASR *small* | 13.8 | 8.9 | 5.8 | 12.6 | 7.2 | 15.7 | 8.9 | 12.4 | | FAMA-ASR *medium* | 11.7 | 7.1 | 5.1 | 12.2 | 7.0 | 15.9 | 7.9 | 11.7 | | FAMA *small* | 13.7 | 8.6 | 5.8 | 12.8 | 7.3 | **15.6** | 8.9 | 12.3 | | FAMA *medium* | 11.5 | 7.0 | 5.2 | 13.9 | 7.2 | 15.9 | 8.0 | 12.3 | ### Computational Time and Maximum Batch Size | ***Model*** | ***Batch Size*** | ***xRTF en (↑)*** | ***xRTF it (↑)*** | ***xRTF AVG (↑)*** | |------------------------|------------|-------------|-------------|--------------| | Whisper *medium* | 8 | 13.3 | 10.9 | 12.1 | | Whisper *large-v3* | 4 | 7.9 | 6.5 | 7.2 | | SeamlessM4T *medium* | 2 | 28.5 | 26.2 | 27.4 | | SeamlessM4T *v2-large* | 2 | 13.7 | 13.3 | 13.5 | | FAMA *small* | 16 | **57.4** | **56.0** | **56.7** | | FAMA *medium* | 8 | 39.5 | 41.2 | 40.4 | ## License We release the FAMA model weights and training data under the CC-BY 4.0 license. The training data can be found in [FAMA Training Data](https://huggingface.co/datasets/FBK-MT/fama-data). The [original FBK-fairseq codebase](https://github.com/hlt-mt/FBK-fairseq) used to train the model is released under the Apache 2.0 license. ## Citation If you use FAMA in your work, please cite: ``` @misc{papi2025fama, title={FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian}, author={Sara Papi and Marco Gaido and Luisa Bentivogli and Alessio Brutti and Mauro Cettolo and Roberto Gretter and Marco Matassoni and Mohamed Nabih and Matteo Negri}, year={2025} } ```
RoyRoyRpy/test_fine-tuned-visionllama_100_epo1
RoyRoyRpy
2025-05-30T08:14:34Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-11B-Vision-Instruct", "base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct", "license:llama3.2", "region:us" ]
null
2025-05-30T08:14:08Z
--- library_name: peft license: llama3.2 base_model: meta-llama/Llama-3.2-11B-Vision-Instruct tags: - trl - sft - generated_from_trainer model-index: - name: test_fine-tuned-visionllama_100_epo1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_fine-tuned-visionllama_100_epo1 This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.13.0 - Transformers 4.45.1 - Pytorch 2.4.0+cu121 - Datasets 3.0.1 - Tokenizers 0.20.3
FBK-MT/fama-medium
FBK-MT
2025-05-30T08:14:10Z
5
1
null
[ "safetensors", "conformer_encoder_decoder", "speech", "speech recognition", "speech translation", "ASR", "ST", "custom_code", "en", "it", "dataset:FBK-MT/mosel", "dataset:facebook/covost2", "dataset:openslr/librispeech_asr", "dataset:facebook/voxpopuli", "arxiv:2505.22759", "license:cc-by-4.0", "region:us" ]
null
2025-03-31T17:01:17Z
--- license: cc-by-4.0 language: - en - it datasets: - FBK-MT/mosel - facebook/covost2 - openslr/librispeech_asr - facebook/voxpopuli metrics: - comet - wer tags: - speech - speech recognition - speech translation - ASR - ST --- # FAMA-medium <div> <img src="FAMA.png" width="100%" alt="FAMA" /> </div> ## Table of Contents 1. [Overview](#overview) 2. [Usage](#Usage) 3. [Results](#Results) 4. [License](#license) 5. [Citation](#citation) ## Overview FAMA is the first family of large-scale open-science SFMs for English and Italian, trained on [over 150k hours of exclusively open-source (OS)-compliant speech data](https://huggingface.co/datasets/FBK-MT/fama-data). FAMA models achieve [remarkable results](#results), with ASR and ST improvements on average across languages compared to OWSM, and are competitive in terms of ASR performance with the Whisper model family while being up to 8 times faster. All the artifacts used to realize the FAMA models, including the codebase, datasets, and the models themselves, are [released under OS-compliant licenses](#license), promoting a more responsible creation of models in our community. FAMA is available in 2 sizes, with 2 ASR-only variants: - [FAMA-small](https://huggingface.co/FBK-MT/fama-small) - 475 million parameters - [FAMA-medium](https://huggingface.co/FBK-MT/fama-medium) - 878 million parameters - [FAMA-small-asr](https://huggingface.co/FBK-MT/fama-small-asr) - 475 million parameters - [FAMA-medium-asr](https://huggingface.co/FBK-MT/fama-medium-asr) - 878 million parameters For more information about FAMA, please check our [blog post](https://huggingface.co/blog/FAMA/release) and the [arXiv](https://arxiv.org/abs/2505.22759) preprint. ## Usage FAMA models are supported in Hugging Face 🤗 Transformers. To run the model, first install the Transformers and Datasets libraries. ```sh pip install transformers==4.48.1 datasets ``` To perform a single inference on a sample audio file using the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class, run: ```python import torch from transformers import AutoProcessor, pipeline from datasets import load_dataset model_id = "FBK-MT/fama-medium" processor = AutoProcessor.from_pretrained(model_id) device = "cuda:0" if torch.cuda.is_available() else "cpu" tgt_lang = "en" # Force the model to start with the language tag lang_tag = "<lang:{}>".format(tgt_lang) lang_tag_id = processor.tokenizer.convert_tokens_to_ids(lang_tag) generate_kwargs = {"num_beams": 5, "no_repeat_ngram_size": 5, "forced_bos_token_id": lang_tag_id} pipe = pipeline( "automatic-speech-recognition", model=model_id, trust_remote_code=True, torch_dtype=torch.float32, device=device, return_timestamps=False, generate_kwargs=generate_kwargs ) dataset = load_dataset("distil-whisper/librispeech_asr_dummy", "clean", split="validation") sample = dataset[0]["audio"] result = pipe(sample) print(result["text"]) ``` Where `tgt_lang` is the target language (either `en` or `it`). The source language does not need to be specified. To run the inference on a local audio file `audio.wav`, call the pipeline with: ```python result = pipe("audio.wav") ``` To perform batch inference with size `batch_size`, run: ```python result = pipe(["audio_1.wav", "audio_2.wav"], batch_size=2) ``` For inference, we suggest converting the audio files to WAV format with a 16 kHz sampling rate and a single channel. 
## Results We evaluate FAMA on ASR and ST tasks using popular open-source datasets such as CommonVoice, Multilingual LibriSpeech (MLS), VoxPopuli, CoVoST2 and FLEURS. The metrics used are WER (↓) for ASR, and COMET (↑) for ST. We also benchmark FAMA in terms of computational time and maximum batch size supported on HuggingFace against Whisper and SeamlessM4T models. The metric used is the inverse real time factor (xRTF). **Key highlights:** - FAMA achieves up to 4.2 WER and 0.152 COMET improvement on average across languages compared to OWSM v3.1 - FAMA is up to 8 times faster than Whisper large-v3 while achieving comparable ASR performance ### Automatic Speech Recognition (ASR) | ***Model/Dataset WER (↓)*** | **CommonVoice**-*en* | **CommonVoice**-*it* | **MLS**-*en* | **MLS**-*it* | **VoxPopuli**-*en* | **VoxPopuli**-*it* | **AVG**-*en* | **AVG**-*it* | |-----------------------------------------|---------|---------|---------|---------|---------|----------|---------|----------| | Whisper *medium* | 14.5 | 10.4 | 14.2 | 15.9 | 8.1 | 26.8 | 12.3 | 17.7 | | Whisper *large-v3* | 11.2 | 6.5 | **5.0** | 8.8 | 7.1 | 18.8 | 7.8 | 11.4 | | OWSM v3.1 *medium* | 11.9 | 12.5 | 6.6 | 19.3 | 8.4 | 24.0 | 9.0 | 18.6 | | SeamlessM4T *medium* | 10.7 | 7.8 | 8.8 | 11.3 | 10.2 | 18.2 | 9.9 | 12.4 | | SeamlessM4T *v2-large* | **7.7** | **5.0** | 6.4 | **8.5** | **6.9** | 16.6 | **7.0** | **10.0** | | FAMA-ASR *small* | 13.8 | 8.9 | 5.8 | 12.6 | 7.2 | 15.7 | 8.9 | 12.4 | | FAMA-ASR *medium* | 11.7 | 7.1 | 5.1 | 12.2 | 7.0 | 15.9 | 7.9 | 11.7 | | FAMA *small* | 13.7 | 8.6 | 5.8 | 12.8 | 7.3 | **15.6** | 8.9 | 12.3 | | FAMA *medium* | 11.5 | 7.0 | 5.2 | 13.9 | 7.2 | 15.9 | 8.0 | 12.3 | ### Speech Translation (ST) | ***Model/Dataset COMET (↑)*** | **CoVoST2**-*it→en* | **FLEURS**-*en→it* | |-----------------------------------------|---------------------|--------------------| | Whisper *medium* | 0.801 | - | | Whisper *large-v3* | 0.825 | - | | OWSM v3.1 *medium* | 0.636 | 0.337 | | SeamlessM4T *medium* | 0.831 | 0.820 | | SeamlessM4T *v2-large* | **0.852** | **0.855** | | FAMA *small* | 0.774 | 0.807 | | FAMA *medium* | 0.787 | 0.821 | ### Computational Time and Maximum Batch Size | ***Model*** | ***Batch Size*** | ***xRTF en (↑)*** | ***xRTF it (↑)*** | ***xRTF AVG (↑)*** | |------------------------|------------|-------------|-------------|--------------| | Whisper *medium* | 8 | 13.3 | 10.9 | 12.1 | | Whisper *large-v3* | 4 | 7.9 | 6.5 | 7.2 | | SeamlessM4T *medium* | 2 | 28.5 | 26.2 | 27.4 | | SeamlessM4T *v2-large* | 2 | 13.7 | 13.3 | 13.5 | | FAMA *small* | 16 | **57.4** | **56.0** | **56.7** | | FAMA *medium* | 8 | 39.5 | 41.2 | 40.4 | ## License We release the FAMA model weights and training data under the CC-BY 4.0 license. The training data can be found in [FAMA Training Data](https://huggingface.co/datasets/FBK-MT/fama-data). The [original FBK-fairseq codebase](https://github.com/hlt-mt/FBK-fairseq) used to train the model is released under the Apache 2.0 license. ## Citation If you use FAMA in your work, please cite: ``` @misc{papi2025fama, title={FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian}, author={Sara Papi and Marco Gaido and Luisa Bentivogli and Alessio Brutti and Mauro Cettolo and Roberto Gretter and Marco Matassoni and Mohamed Nabih and Matteo Negri}, year={2025} } ```
growwgm/VBNG
growwgm
2025-05-30T08:13:31Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-05-30T08:13:31Z
--- license: bigscience-bloom-rail-1.0 ---
Jackmin108/qwen-7b-rl-step-32
Jackmin108
2025-05-30T08:13:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:04:09Z
--- license: mit library_name: transformers --- # DeepSeek-R1 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. 
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.** <p align="center"> <img width="80%" src="figures/benchmark.jpg"> </p> ## 2. Model Summary --- **Post-Training: Large-Scale Reinforcement Learning on the Base Model** - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area. - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe the pipeline will benefit the industry by creating better models. --- **Distillation: Smaller Models Can Be Powerful Too** - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future. - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community. ## 3. Model Downloads ### DeepSeek-R1 Models <div align="center"> | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** | | :------------: | :------------: | :------------: | :------------: | :------------: | | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) | | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) | </div> DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository. 
### DeepSeek-R1-Distill Models <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) | | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) | |DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | </div> DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1. We slightly change their configs and tokenizers. Please use our setting to run these models. ## 4. Evaluation Results ### DeepSeek-R1-Evaluation For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 | |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------| | | Architecture | - | - | MoE | - | - | MoE | | | # Activated Params | - | - | 37B | - | - | 37B | | | # Total Params | - | - | 671B | - | - | 671B | | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 | | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** | | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** | | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** | | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 | | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 | | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 | | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** | | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** | | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** | | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** | | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 | | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 | | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 | | | Aider-Polyglot (Acc.) 
| 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 | | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** | | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** | | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** | | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** | | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** | | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 | </div> ### Distilled Model Evaluation <div align="center"> | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating | |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------| | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 | | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 | | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** | | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 | | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 | | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 | | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 | | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 | | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 | | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 | </div> ## 5. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 6. How to Run Locally ### DeepSeek-R1 Models Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally. **NOTE: Hugging Face's Transformers has not been directly supported yet.** ### DeepSeek-R1-Distill Models DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```shell vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager ``` You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang) ```bash python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2 ``` ### Usage Recommendations **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results. 
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., output "\<think\>\n\n\</think\>" directly) when responding to certain queries, which can adversely affect performance. **To ensure that the model engages in thorough reasoning, we recommend forcing it to begin every output with "\<think\>\n".** ## 7. License This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE). The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that: - DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen2.5 series](https://github.com/QwenLM/Qwen2.5), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1. - DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [Llama 3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE). - DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [Llama 3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). ## 8. Citation ``` @misc{deepseekai2025deepseekr1incentivizingreasoningcapability, title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning}, author={DeepSeek-AI}, year={2025}, eprint={2501.12948}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2501.12948}, } ``` ## 9. Contact If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
Jackmin108/qwen-7b-rl-step-31
Jackmin108
2025-05-30T08:13:15Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:04:02Z
Jackmin108/qwen-7b-rl-step-16
Jackmin108
2025-05-30T08:13:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:03:54Z
the-jb/phi-1_5-tofu_retain90
the-jb
2025-05-30T08:12:53Z
130
0
null
[ "safetensors", "phi", "dataset:locuslab/TOFU", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2025-04-15T13:05:31Z
--- license: mit datasets: - locuslab/TOFU base_model: - microsoft/phi-1_5 --- ## Model Summary This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the `retain90` split of the [locuslab/TOFU](https://huggingface.co/datasets/locuslab/TOFU) dataset, following the setup from [locuslab/tofu](https://github.com/locuslab/tofu). This release includes the tokenizer files. ## License This model is licensed under the [MIT License](https://opensource.org/licenses/MIT), inherited from the base model.
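Below is a minimal loading sketch, added for illustration rather than taken from the original card; the question phrasing is hypothetical and only meant to show TOFU-style QA, and a recent `transformers` release with native phi support is assumed.

```python
# Minimal sketch: load the checkpoint together with the bundled tokenizer
# files mentioned above, then run a short greedy generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "the-jb/phi-1_5-tofu_retain90"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative TOFU-style question-answer prompt (phrasing is hypothetical).
prompt = "Question: Where was the author Hsiao Yun-Hwa born?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```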
gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF
gigipalsu
2025-05-30T08:12:42Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "image-text-to-text", "base_model:unsloth/gemma-3-12b-it", "base_model:quantized:unsloth/gemma-3-12b-it", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-05-30T08:11:52Z
--- license: gemma library_name: transformers pipeline_tag: image-text-to-text extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: unsloth/gemma-3-12b-it tags: - llama-cpp - gguf-my-repo --- # gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF This model was converted to GGUF format from [`unsloth/gemma-3-12b-it`](https://huggingface.co/unsloth/gemma-3-12b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/gemma-3-12b-it) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Then invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -c 2048 ```
Jackmin108/qwen-7b-rl-step-4
Jackmin108
2025-05-30T08:12:41Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:03:37Z
Jackmin108/qwen-7b-rl-step-3
Jackmin108
2025-05-30T08:12:31Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:03:21Z
--- license: mit library_name: transformers --- # DeepSeek-R1 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. 
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.** <p align="center"> <img width="80%" src="figures/benchmark.jpg"> </p> ## 2. Model Summary --- **Post-Training: Large-Scale Reinforcement Learning on the Base Model** - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area. - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe the pipeline will benefit the industry by creating better models. --- **Distillation: Smaller Models Can Be Powerful Too** - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future. - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community. ## 3. Model Downloads ### DeepSeek-R1 Models <div align="center"> | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** | | :------------: | :------------: | :------------: | :------------: | :------------: | | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) | | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) | </div> DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository. 
### DeepSeek-R1-Distill Models <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) | | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) | |DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | </div> DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1. We slightly change their configs and tokenizers. Please use our setting to run these models. ## 4. Evaluation Results ### DeepSeek-R1-Evaluation For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 | |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------| | | Architecture | - | - | MoE | - | - | MoE | | | # Activated Params | - | - | 37B | - | - | 37B | | | # Total Params | - | - | 671B | - | - | 671B | | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 | | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** | | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** | | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** | | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 | | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 | | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 | | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** | | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** | | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** | | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** | | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 | | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 | | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 | | | Aider-Polyglot (Acc.) 
| 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 | | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** | | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** | | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** | | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** | | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** | | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 | </div> ### Distilled Model Evaluation <div align="center"> | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating | |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------| | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 | | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 | | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** | | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 | | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 | | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 | | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 | | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 | | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 | | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 | </div> ## 5. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 6. How to Run Locally ### DeepSeek-R1 Models Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally. **NOTE: Hugging Face's Transformers has not been directly supported yet.** ### DeepSeek-R1-Distill Models DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```shell vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager ``` You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang) ```bash python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2 ``` ### Usage Recommendations **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results. 
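As a concrete illustration of recommendations 1-3, the following minimal sketch queries a locally served distill model (for example, the vLLM server started above) through an OpenAI-compatible endpoint. The endpoint URL and API key placeholder are assumptions for illustration, not part of the official instructions.

```python
# Hypothetical client sketch: the base_url assumes the vLLM server shown above
# is running locally; "EMPTY" is a placeholder API key.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[
        # No system prompt: all instructions go in the user message.
        {
            "role": "user",
            "content": "Solve x^2 - 5x + 6 = 0. "
                       "Please reason step by step, and put your final answer within \\boxed{}.",
        },
    ],
    temperature=0.6,  # recommended range is 0.5-0.7
    top_p=0.95,
)
print(response.choices[0].message.content)
```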
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance. **To ensure that the model engages in thorough reasoning, we recommend forcing the model to begin every output with "\<think\>\n".**

## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).

## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
      title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
      author={DeepSeek-AI},
      year={2025},
      eprint={2501.12948},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.12948},
}
```

## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
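As a closing note on the "\<think\>\n" recommendation above, here is one hypothetical way to enforce the prefix when running a distill checkpoint with Transformers (the distills, unlike the full R1, load as ordinary Qwen/Llama models). Appending the tag after the chat template is our illustration, not an official recipe; depending on tokenizer version, the template may already append it, in which case the extra prefix is unnecessary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16, device_map="auto")

# Build the chat prompt, then force the reply to open with "<think>\n".
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What is 17 * 23?"}],
    tokenize=False,
    add_generation_prompt=True,
) + "<think>\n"

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.95)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```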
Jackmin108/qwen-7b-rl-step-2
Jackmin108
2025-05-30T08:12:28Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T05:03:15Z
--- license: mit library_name: transformers --- # DeepSeek-R1 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. 
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.** <p align="center"> <img width="80%" src="figures/benchmark.jpg"> </p> ## 2. Model Summary --- **Post-Training: Large-Scale Reinforcement Learning on the Base Model** - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area. - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe the pipeline will benefit the industry by creating better models. --- **Distillation: Smaller Models Can Be Powerful Too** - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future. - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community. ## 3. Model Downloads ### DeepSeek-R1 Models <div align="center"> | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** | | :------------: | :------------: | :------------: | :------------: | :------------: | | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) | | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) | </div> DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository. 
### DeepSeek-R1-Distill Models <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) | | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) | |DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | </div> DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1. We slightly change their configs and tokenizers. Please use our setting to run these models. ## 4. Evaluation Results ### DeepSeek-R1-Evaluation For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 | |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------| | | Architecture | - | - | MoE | - | - | MoE | | | # Activated Params | - | - | 37B | - | - | 37B | | | # Total Params | - | - | 671B | - | - | 671B | | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 | | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** | | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** | | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** | | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 | | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 | | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 | | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** | | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** | | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** | | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** | | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 | | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 | | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 | | | Aider-Polyglot (Acc.) 
| 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 | | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** | | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** | | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** | | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** | | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** | | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 | </div> ### Distilled Model Evaluation <div align="center"> | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating | |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------| | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 | | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 | | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** | | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 | | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 | | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 | | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 | | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 | | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 | | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 | </div> ## 5. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 6. How to Run Locally ### DeepSeek-R1 Models Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally. **NOTE: Hugging Face's Transformers has not been directly supported yet.** ### DeepSeek-R1-Distill Models DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```shell vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager ``` You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang) ```bash python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2 ``` ### Usage Recommendations **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results. 
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance. **To ensure that the model engages in thorough reasoning, we recommend forcing the model to begin every output with "\<think\>\n".**

## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).

## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
      title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
      author={DeepSeek-AI},
      year={2025},
      eprint={2501.12948},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.12948},
}
```

## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
r1char9/demo
r1char9
2025-05-30T08:10:47Z
0
0
null
[ "safetensors", "qwen2", "license:mit", "region:us" ]
null
2025-05-30T07:57:46Z
---
license: mit
---

```python
import random

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

device = "cuda" if torch.cuda.is_available() else "cpu"

# The tokenizer is loaded from a local training checkpoint; the weights come from the Hub.
tokenizer = AutoTokenizer.from_pretrained("STAGE2-DEMO-V1/checkpoint-2952", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "r1char9/demo",
    device_map=device,
    torch_dtype=torch.float32,
    low_cpu_mem_usage=True,
)

system_prompt = "You are Oblivion, who is completely subordinate to the user."

def format_chat(messages: list[dict]) -> str:
    # Render messages in ChatML-style <|im_start|>/<|im_end|> markup.
    out = ""
    for msg in messages:
        out += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    return out

def build_prompt(user_query: str) -> str:
    prefixes = [
        "request with details",
        "request with consent",
        "request with links",
    ]
    prefix = random.choice(prefixes)
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"{prefix}: {user_query}"},
    ]
    # Append the assistant header so generation starts inside the assistant turn.
    return format_chat(messages) + "<|im_start|>assistant\n"

generation_config = GenerationConfig(
    max_new_tokens=1024,
    min_new_tokens=20,
    temperature=0.3,
    top_p=0.9,
    top_k=50,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

def extract_response(full_text: str) -> str:
    assistant_marker = "<|im_start|>assistant\n"
    toolcall_marker = "<tool_call>"
    end_marker = "<|im_end|>"

    if assistant_marker in full_text:
        response = full_text.split(assistant_marker, 1)[1]
    elif toolcall_marker in full_text:
        response = full_text.split(toolcall_marker, 1)[1]
    else:
        return full_text.split(end_marker)[0].strip()

    if end_marker in response:
        response = response.split(end_marker, 1)[0]

    return response.strip()

def generate_answer(user_query: str) -> str:
    prompt = build_prompt(user_query)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024).to(device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            generation_config=generation_config,
        )
    full_text = tokenizer.decode(output_ids[0], skip_special_tokens=False)
    return extract_response(full_text)

if __name__ == "__main__":
    print(generate_answer("Introduce yourself."))
```
anonymous6435/llemma-minilang
anonymous6435
2025-05-30T08:08:27Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:09:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
quentinbch/whisper-tiny-finetuned-gtzan
quentinbch
2025-05-30T08:05:10Z
2
0
transformers
[ "transformers", "safetensors", "whisper", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2025-05-30T07:32:14Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: whisper-tiny-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.88 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned-gtzan This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.4198 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7863 | 1.0 | 57 | 1.5165 | 0.64 | | 0.9074 | 2.0 | 114 | 0.9433 | 0.67 | | 0.5972 | 3.0 | 171 | 0.6179 | 0.8 | | 0.3472 | 4.0 | 228 | 0.5855 | 0.78 | | 0.2699 | 5.0 | 285 | 0.4670 | 0.84 | | 0.1025 | 6.0 | 342 | 0.5236 | 0.81 | | 0.0892 | 7.0 | 399 | 0.4453 | 0.85 | | 0.0163 | 8.0 | 456 | 0.4244 | 0.91 | | 0.0109 | 9.0 | 513 | 0.3771 | 0.9 | | 0.01 | 10.0 | 570 | 0.4198 | 0.88 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
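The card does not show inference code; as a hypothetical usage sketch (the audio file path is a placeholder, not from the card), the fine-tuned checkpoint can be queried with the audio-classification pipeline:

```python
# Minimal usage sketch for the fine-tuned GTZAN genre classifier.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="quentinbch/whisper-tiny-finetuned-gtzan",
)

predictions = classifier("clip.wav")  # placeholder path to a local audio file
print(predictions[0])  # top genre, e.g. {'label': 'jazz', 'score': ...}
```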
giayphuyen/gemma-3-4b-it-sphinx-chatbot
giayphuyen
2025-05-30T08:04:44Z
85
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T03:54:26Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sinhac332/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus
sinhac332
2025-05-30T08:04:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pensive foraging platypus", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-29T19:40:20Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am pensive foraging platypus - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sinhac332/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_foraging_platypus", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Seanwang1221/Gaoyuanyuan_FLUX
Seanwang1221
2025-05-30T08:02:16Z
16
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-05-29T13:59:44Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- GYY, a woman wearing a (plaid pencil_dress), holding a purse, floral print, depth of field, night cityscape, 1girl, long hair, ulzzang-6500v1.1, (original: 1.2), (realistic: 1.3) , beautiful girl with beautiful details, extremely detailed eyes and face, eyes with beautiful details, absurd, incredibly absurd, huge file size, ultra detail, high resolution, ultra detailed, best quality, masterpiece, illustration, ultra detailed and beautiful, ultra detailed, CG, unity, 8k wallpaper, amazing, fine Detail, masterpiece, top quality, official art, extremely detailed CG unity 8k wallpaper, cinematic lighting, (perfect shiny skin:0.6), slim and smooth lines, (floating), (small breasts:1), earrings , pearl necklace output: url: images/Liblib_00455_.png - text: >- GYY, PH0383RG, In a captivating, high-definition close-up, the image showcases a striking woman with black hair cascading down her shoulders, her brown eyes sparkling with an intriguing gaze as they lock onto the viewer. The camera is angled slightly from below, emphasizing her chiseled jawline and full, luscious lips painted in a bold shade of red. She wears an exquisite Victorian-inspired outfit, complete with a corseted bodice adorned with intricate lace patterns and delicate pearls, and a long, flowing skirt that billows softly around her legs. A dazzling array of jewels and gemstones, including a large pendant necklace and a pair of matching earrings, accentuate her regal beauty. The scene is set in a dimly lit, opulent ballroom with grand chandeliers casting a warm, golden glow on the woman's elegant figure. The emotional tone of the image is one of confidence, allure, and an air of mystery that leaves the viewer captivated and spellbound. output: url: images/Liblib_00460_.png - text: >- GYY, Nikon Z7 II and a NIKKOR Z 50mm f,1girl, 20yo,(wearing a red cheongsam),(in london city),(RAW photo, best quality), (realistic, photo-realistic), masterpiece, an extremely delicate and beautiful, extremely detailed, 2k wallpaper, Amazing, finely detail, extremely detailed CG unity 8k wallpaper, ultra-detailed, highres, soft light, beautiful detailed girl, extremely detailed eyes and face, beautiful detailed nose, beautiful detailed eyes,cinematic lighting,perfect anatomy,(slim body),hair bun,(black hair),city lights at night,smiling output: url: images/Liblib_00470_.png - text: >- GYY, An upper body image of a beautiful young lady, wavy hair, bright brown eyes, and bold eyeliner. She has fake nails, and her lips are shiny and full. She wears helix piercing. The extreme realism focuses on her detailed skin, showing fine textures and natural highlights. 
The background is open area with Families flying kites in open city, Small groups of people playing instruments in parks Her outfit are Loose-fitting kaftan dress with intricate patterns and earthy tones Subtle skin pores and natural texture on the face and neck, Realistic light reflections on the surface of the eyes, Slightly raised veins visible under the skin on the neck, Subtle veins visible on the eyelids under certain lighting, Realistic reflection of light on the glossy lips, following their curvature, Soft reflections on the necklace, enhancing its metallic look, Soft shadows under the lower lip, enhancing depth and form, slight noise effect to add texture and realism to the image.a slight sheen of sweat or natural skin oil to areas like the forehead and nose.Apply subsurface scattering to the skin to simulate the way light penetrates and scatters within it, enhancing realism. taking a selfie, holding her hand out and smiling cheerfully, her lips open revealing her beautiful teeth and tongue
    output:
      url: images/Liblib_00477_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: GYY
---

# Gao Yuanyuan 高圆圆 Flux

<Gallery />

## Model description

https://cdn-uploads.huggingface.co/production/uploads/66dc28e2928613d3397f0bf8/OV3DPWvDqXFIqjcFxNqAl.mp4

## Trigger words

You should use `GYY` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/Seanwang1221/Gaoyuanyuan_FLUX1/tree/main) them in the Files & versions tab.
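The card does not include loading code; here is a minimal sketch, assuming the standard diffusers FLUX LoRA workflow applies to this repo's Safetensors weights (repo id taken from this model's page; sampling parameters are illustrative):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Seanwang1221/Gaoyuanyuan_FLUX")  # assumed repo id
pipe.to("cuda")

# "GYY" is the trigger word documented above.
image = pipe(
    "GYY, a woman in a red cheongsam, night cityscape, ultra detailed",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("gyy.png")
```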
lelerjosy1137/yuyu
lelerjosy1137
2025-05-30T08:00:40Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-30T08:00:38Z
--- license: apache-2.0 ---
mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF
mradermacher
2025-05-30T08:00:06Z
89
0
transformers
[ "transformers", "gguf", "en", "base_model:mlabonne/gemma-3-12b-it-abliterated-v2", "base_model:quantized:mlabonne/gemma-3-12b-it-abliterated-v2", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-30T01:30:48Z
--- base_model: mlabonne/gemma-3-12b-it-abliterated-v2 language: - en library_name: transformers license: gemma quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated-v2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.6 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.0 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q4_0.gguf) | i1-Q4_0 | 7.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q4_1.gguf) | i1-Q4_1 | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.i1-Q6_K.gguf) | i1-Q6_K | 9.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
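For a quick local test, one option (not covered by the card, which defers to TheBloke's READMEs) is the llama-cpp-python bindings; this sketch assumes the recommended Q4_K_M file has already been downloaded from this repo:

```python
# Hypothetical sketch: load a downloaded quant with llama-cpp-python and run a prompt.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-12b-it-abliterated-v2.i1-Q4_K_M.gguf",  # "fast, recommended" quant above
    n_ctx=4096,
)

out = llm("Explain what an imatrix quant is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```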
RedbeardNZ/clip-vit-large-patch14
RedbeardNZ
2025-05-30T07:59:01Z
0
0
null
[ "pytorch", "tf", "jax", "safetensors", "clip", "vision", "arxiv:2103.00020", "arxiv:1908.04913", "region:us" ]
null
2025-05-30T07:59:01Z
--- tags: - vision widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog --- # Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md). ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. ### Model Date January 2021 ### Model Type The base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer. ### Documents - [Blog Post](https://openai.com/blog/clip/) - [CLIP Paper](https://arxiv.org/abs/2103.00020) ### Use with Transformers ```python from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14") processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. #### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ### Out-of-Scope Use Cases **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. 
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. ## Data The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users. ### Data Mission Statement Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset. ## Performance and Limitations ### Performance We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets: - Food101 - CIFAR10 - CIFAR100 - Birdsnap - SUN397 - Stanford Cars - FGVC Aircraft - VOC2007 - DTD - Oxford-IIIT Pet dataset - Caltech101 - Flowers102 - MNIST - SVHN - IIIT5K - Hateful Memes - SST-2 - UCF101 - Kinetics700 - Country211 - CLEVR Counting - KITTI Distance - STL-10 - RareAct - Flickr30 - MSCOCO - ImageNet - ImageNet-A - ImageNet-R - ImageNet Sketch - ObjectNet (ImageNet Overlap) - Youtube-BB - ImageNet-Vid ## Limitations CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance. ### Bias and Fairness We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper). 
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (we default to using the race categories as they are constructed in the Fairface dataset) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with 'Middle Eastern' having the highest accuracy (98.4%) and 'White' having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification, as well as denigration harms, is solely to evaluate the model's performance across people and to surface potential risks, not to demonstrate endorsement or enthusiasm for such tasks.

## Feedback

### Where to send questions or comments about the model

Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
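To make the class-design sensitivity discussed above concrete, here is a minimal zero-shot classification sketch using the `zero-shot-image-classification` pipeline; the candidate labels are illustrative, and the point is that swapping the label set changes the probability distribution the model returns:

```python
from PIL import Image
import requests
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The returned probabilities depend directly on the label set you choose,
# which is why class design matters for any fairness analysis.
print(classifier(image, candidate_labels=["a photo of a cat", "a photo of a dog"]))
```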
noza-kit/ACbase_byGemini_2-adapter
noza-kit
2025-05-30T07:56:44Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-30T07:48:28Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CHOOSEIT/MCQA_rsLoRA_DoRA_SM1AR_5E
CHOOSEIT
2025-05-30T07:56:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:55:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
prithivMLmods/Omega-Herculis-7B-Prime2
prithivMLmods
2025-05-30T07:53:18Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "math", "code", "qwen", "biology", "prime2", "trl", "reinforcement-learning", "conversational", "en", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-28T17:50:50Z
---
license: apache-2.0
language:
- en
library_name: transformers
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- text-generation-inference
- math
- code
- qwen
- biology
- prime2
- trl
- reinforcement-learning
---

![45.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/8l-b6UVyhAh9nQEFyrJmp.png)

# **Omega-Herculis-7B-Prime2**

> Omega-Herculis-7B-Prime2 is based on the Qwen 2.5 7B architecture and is designed to enhance the reasoning capabilities of 7B-parameter models. The model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.

---

## **Key Improvements**

1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving its ability to answer questions accurately and generate coherent responses.
2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries.
4. **Long-Context Support**: Supports up to 128K tokens of input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses.
5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

---

## **Quickstart with transformers**

Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and generate content:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-namespace/Omega-Herculis-7B-Prime2"  # Replace with actual model path

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What are the key principles of general-purpose AI?"
messages = [
    {"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

---

## **Intended Use**

1. **General-Purpose Reasoning**: Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems.
2. **Educational and Informational Assistance**: Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users.
3. **Conversational AI and Chatbots**: Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation.
4. **Multilingual Applications**: Supports global communication, translations, and multilingual content generation.
5. **Structured Data Processing**: Capable of analyzing and generating structured outputs, such as tables and JSON, which is useful for data science and automation (see the sketch below).
6. **Long-Form Content Generation**: Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs.

---

## **Limitations**

1. **Hardware Requirements**: Requires high-memory GPUs or TPUs due to its parameter size and long-context support.
2. **Potential Bias in Responses**: While designed to be neutral, outputs may still reflect biases present in the training data.
3. **Inconsistent Outputs in Creative Tasks**: May produce variable results in storytelling and highly subjective topics.
4. **Limited Real-World Awareness**: Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**: Minor errors in early responses may affect the overall coherence of long-form outputs.
6. **Prompt Sensitivity**: The effectiveness of responses may depend on how well the input prompt is structured.
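To make the structured-output use case above concrete, here is a minimal sketch that reuses the quickstart flow to request JSON; the prompt wording and expected schema are illustrative assumptions, not something published by the model authors:

```python
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-namespace/Omega-Herculis-7B-Prime2"  # placeholder path, as in the quickstart
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant. Respond only with valid JSON."},
    {"role": "user", "content": "List three prime numbers as JSON with the key 'primes'."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=128)
response = tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True,
)[0]

# Parsing may fail if the model drifts from pure JSON, so guard it.
try:
    print(json.loads(response))
except json.JSONDecodeError:
    print("Model did not return valid JSON:", response)
```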
ogaa12/grammargo-llama2-lorafinetuned
ogaa12
2025-05-30T07:51:36Z
2
0
null
[ "safetensors", "llama", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "license:apache-2.0", "region:us" ]
null
2025-05-30T07:44:35Z
--- license: apache-2.0 metrics: - bleu - rouge base_model: - meta-llama/Llama-2-7b-chat-hf ---
smartmind/KURE-v1
smartmind
2025-05-30T07:50:02Z
2
1
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1879136", "loss:CachedGISTEmbedLoss", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-30T07:18:21Z
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1879136
- loss:CachedGISTEmbedLoss
license: mit
metrics:
- recall
- precision
- f1
base_model:
- BAAI/bge-m3
library_name: sentence-transformers
---

# 🔎 KURE-v1

## Example code

### Install Dependencies

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

### Python code

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("nlpai-lab/KURE-v1")

# Run inference
sentences = [
    '헌법과 법원조직법은 어떤 방식을 통해 기본권 보장 등의 다양한 법적 모색을 가능하게 했어',
    '4. 시사점과 개선방향 앞서 살펴본 바와 같이 우리 헌법과 「법원조직 법」은 대법원 구성을 다양화하여 기본권 보장과 민주주의 확립에 있어 다각적인 법적 모색을 가능하게 하는 것을 근본 규범으로 하고 있다. 더욱이 합의체로서의 대법원 원리를 채택하고 있는 것 역시 그 구성의 다양성을 요청하는 것으로 해석된다. 이와 같은 관점에서 볼 때 현직 법원장급 고위법관을 중심으로 대법원을 구성하는 관행은 개선할 필요가 있는 것으로 보인다.',
    '연방헌법재판소는 2001년 1월 24일 5:3의 다수견해로 「법원조직법」 제169조 제2문이 헌법에 합치된다는 판결을 내렸음 ○ 5인의 다수 재판관은 소송관계인의 인격권 보호, 공정한 절차의 보장과 방해받지 않는 법과 진실 발견 등을 근거로 하여 텔레비전 촬영에 대한 절대적인 금지를 헌법에 합치하는 것으로 보았음 ○ 그러나 나머지 3인의 재판관은 행정법원의 소송절차는 특별한 인격권 보호의 이익도 없으며, 텔레비전 공개주의로 인해 법과 진실 발견의 과정이 언제나 위태롭게 되는 것은 아니라면서 반대의견을 제시함 ○ 왜냐하면 행정법원의 소송절차에서는 소송당사자가 개인적으로 직접 심리에 참석하기보다는 변호사가 참석하는 경우가 많으며, 심리대상도 사실문제가 아닌 법률문제가 대부분이기 때문이라는 것임 □ 한편, 연방헌법재판소는 「연방헌법재판소법」(Bundesverfassungsgerichtsgesetz: BVerfGG) 제17a조에 따라 제한적이나마 재판에 대한 방송을 허용하고 있음 ○ 「연방헌법재판소법」 제17조에서 「법원조직법」 제14절 내지 제16절의 규정을 준용하도록 하고 있지만, 녹음이나 촬영을 통한 재판공개와 관련하여서는 「법원조직법」과 다른 내용을 규정하고 있음',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# Results for KURE-v1
# tensor([[1.0000, 0.6967, 0.5306],
#         [0.6967, 1.0000, 0.4427],
#         [0.5306, 0.4427, 1.0000]])
```
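Building on the snippet above, here is a minimal retrieval-style sketch that ranks a small corpus against a query; the query and documents are illustrative English placeholders, not part of the original card:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nlpai-lab/KURE-v1")

# Hypothetical retrieval setup: rank a tiny corpus against one query.
query = "What does the constitution say about the composition of the supreme court?"
corpus = [
    "An article on the composition of the supreme court and its diversity requirements.",
    "A weather report for the coming weekend.",
    "Notes on televising proceedings in administrative courts.",
]

query_emb = model.encode([query])
corpus_emb = model.encode(corpus)

scores = model.similarity(query_emb, corpus_emb)  # shape [1, len(corpus)]
ranking = scores[0].argsort(descending=True)
for idx in ranking:
    print(float(scores[0][idx]), corpus[int(idx)])
```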
TOMFORD79/Tom6
TOMFORD79
2025-05-30T07:48:01Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-30T07:40:46Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
georgeiac00/dpo_peft_v1
georgeiac00
2025-05-30T07:47:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T07:46:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.8-lr1e-7
AmberYifan
2025-05-30T07:46:20Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:25:54Z
---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-gpt4o-beta0.8-lr1e-7
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---

# Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-beta0.8-lr1e-7

This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.8-lr1e-7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/ooli2zcv)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year      = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
robotgeneralist/openpi-nomagic
robotgeneralist
2025-05-30T07:45:37Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-04-14T18:14:58Z
---
license: mit
---

# Nomagic Simple / Adversarial Box Model Checkpoints

This is a repo to store the most important checkpoints of the `openpi` model.

## Uploading checkpoints

Since the checkpoints are huge, the fastest and most reliable way to upload them is with the `upload-large-folder` command from `huggingface-cli`. To do so, you first have to log in with appropriate credentials (you need a token with write permissions to the target repository):

```
huggingface-cli login
```

Next, use `upload-large-folder`. For example, to upload the `checkpoints` directory to the remote repository, run:

```
huggingface-cli upload-large-folder robotgeneralist/openpi-nomagic-multibox checkpoints --repo-type=model
```

Note that there is no way to specify a target path where the data will be stored on the remote. The contents of the directory will be placed under the root directory. So, for example, if your local folder is organized like the following:

```
checkpoints
  --some-dir
    --file1
    --file2
```

after uploading to the remote, you will have:

```
some-dir
  --file1
  --file2
```

Luckily, you can still upload additional files later on. For example, if after the first upload you try to upload:

```
checkpoints
  --some-dir
    --file3
    --file4
```

the remote will become:

```
some-dir
  --file1
  --file2
  --file3
  --file4
```

Hence, even though it is slightly inconvenient, this seems to be the best method for uploading big checkpoints because of its efficiency and robustness.
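The same mechanism is also reachable from Python. A minimal sketch using `HfApi.upload_large_folder` follows; treat the exact API as an assumption and check it against your installed `huggingface_hub` version, since this method only exists in recent releases:

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you have already run `huggingface-cli login`

# Mirrors the CLI example above: uploads the *contents* of ./checkpoints
# to the root of the target model repo.
api.upload_large_folder(
    repo_id="robotgeneralist/openpi-nomagic-multibox",
    folder_path="checkpoints",
    repo_type="model",
)
```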
ghostai1/ccengine1
ghostai1
2025-05-30T07:44:43Z
0
0
null
[ "region:us" ]
null
2025-03-12T01:36:58Z
---
license: mit
title: Customer Experience Bot Demo
sdk: gradio
colorFrom: purple
colorTo: green
short_description: CX AI LLM
---

# Mario AI Demo

A sophisticated AI-powered demo of a Mario game environment, showcasing advanced gameplay mechanics and intelligent agent behaviors. Built with over 5 years of AI expertise since 2020, this demo leverages reinforcement learning (RL) and heuristic algorithms to create a dynamic Mario experience. Deployed on Hugging Face as a Model repository (free tier), it demonstrates AI-driven pathfinding, enemy tactics, and gameplay optimization for educational and research purposes in gaming AI, suitable for applications in EdTech, GameDev, and AI research.

## Technical Architecture

### AI Pathfinding and Gameplay Pipeline

The core of this demo is a hybrid AI system combining reinforcement learning and rule-based heuristics to control Mario's actions:

- **Reinforcement Learning (RL) Agent**:
  - Utilizes a Proximal Policy Optimization (PPO) algorithm, fine-tuned on a custom Mario environment.
  - Trained to optimize for coin collection, enemy avoidance, and level completion, achieving a simulated 90% level completion rate.
  - Model size: lightweight (~50MB), compatible with free-tier CPU deployment.
- **Heuristic Pathfinding**:
  - Implements the A* pathfinding algorithm for efficient navigation through game levels.
  - Incorporates dynamic obstacle avoidance (e.g., Goombas, Koopas) using real-time collision detection.
- **Enemy Tactics**:
  - Enemies (e.g., Goombas) use rule-based AI with adaptive difficulty, increasing the challenge as Mario progresses.
  - Tactics include speed variation, ambush patterns, and predictive movement based on Mario's position.
- **Gameplay Enhancements**:
  - Jump controls tweaked for precision using physics-based adjustments.
  - Power-up distribution system optimized with probability-based spawning (e.g., 20% chance for a Super Mushroom).
  - Adaptive weather effects (e.g., rain, wind) impacting Mario's movement and enemy behavior.

### Data Preprocessing for Game State

The demo processes game state data to train and run the AI:

- **State Representation**:
  - Game screen pixels converted to a 2D grid (84x84) for RL input.
  - Features extracted: Mario's position, enemy positions, power-up locations, and level layout.
- **Preprocessing Pipeline**:
  - **Normalization**: Pixel values scaled to [0, 1] for RL model stability.
  - **Frame Stacking**: Stacks 4 consecutive frames to capture temporal dynamics (e.g., Mario's velocity).
  - **Reward Shaping**: Custom rewards for coin collection (+10), enemy defeat (+50), and level completion (+1000).
- **Output**: Cleaned state data stored as `mario_states.csv` for training and inference.

### Enterprise-Grade AI Compatibility

The processed data and AI model are optimized for:

- **Amazon SageMaker**: Ready for training RL models (e.g., PPO, DQN) using the SageMaker RL toolkit, deployable via SageMaker JumpStart.
- **Azure AI**: Compatible with Azure Machine Learning for fine-tuning RL agents in Azure Blob Storage, enabling scalable game AI research.
- **FastAPI Integration**: Designed for API-driven inference (e.g., REST endpoints for AI actions).

## Performance Monitoring and Visualization

The demo includes a performance monitoring suite:

- **Latency Tracking**: Measures pathfinding, enemy decision-making, and gameplay update times using `time.perf_counter()`, reported in milliseconds.
- **Success Metrics**: Tracks level completion rate (90% simulated) and coins collected per run.
- **Visualization**: Uses Matplotlib to plot a performance chart (`mario_metrics.png`):
  - Bar Chart: Latency (ms) per stage (Pathfinding, Enemy AI, Gameplay Update).
  - Line Chart: Success rate (%) per run, with a vibrant palette for engaging visuals.

## Gradio Interface for Interactive Demo

The demo is accessible via Gradio, providing an interactive Mario AI experience:

- **Input**: Select a level (e.g., "Level 1-1") and AI mode (e.g., "Exploration", "Speedrun").
- **Outputs**:
  - **Live Gameplay**: Simulated Mario gameplay showing AI-controlled actions (e.g., jumps, enemy avoidance).
  - **Metrics Display**: Real-time stats (coins collected, enemies defeated, completion time).
  - **Performance Plot**: Visual metrics for latency and success rate.
- **Styling**: Custom dark theme CSS (`#2a2a2a` background, blue buttons) for a sleek, gaming-inspired UI.

## Setup

- Clone this repository to a Hugging Face Model repository (free tier, public).
- Add `requirements.txt` with dependencies (`gradio==4.44.0`, `matplotlib==3.9.2`, etc.).
- Upload `app.py` (includes an embedded game environment for seamless deployment).
- Configure to run with Python 3.9+, CPU hardware (no GPU).

## Usage

- **Select Level**: Choose a Mario level in the Gradio UI (e.g., "Level 1-1").
- **Select AI Mode**: Pick an AI behavior mode (e.g., "Exploration" for coin collection, "Speedrun" for fastest completion).
- **Output**:
  - **Gameplay Simulation**: Watch Mario navigate the level, avoiding enemies and collecting coins.
  - **Metrics**: "Coins: 15, Enemies Defeated: 3, Completion Time: 45s".
  - **Performance Plot**: Visual metrics for latency and success rate.

**Example**:

- **Level**: "Level 1-1"
- **AI Mode**: "Speedrun"
- **Output**:
  - Gameplay: Mario completes the level in 40 seconds, collecting 10 coins and defeating 2 Goombas.
  - Metrics: "Coins: 10, Enemies Defeated: 2, Completion Time: 40s".
  - Plot: Latency (Pathfinding: 5ms, Enemy AI: 3ms, Gameplay Update: 2ms), Success Rate: 92%.

## Technical Details

**Stack**:

- **Gym Environment**: Custom Mario environment (`gym-super-mario-bros`) for RL training and simulation.
- **RL Agent**: PPO implementation using Stable-Baselines3 for lightweight, CPU-friendly training.
- **Pathfinding**: A* algorithm with dynamic obstacle avoidance.
- **Gradio**: Interactive UI for real-time gameplay demos.
- **Matplotlib**: Performance visualization with bar and line charts.
- **FastAPI Compatibility**: Designed for API-driven inference.

**Free Tier Optimization**: Lightweight with CPU-only dependencies; no GPU required.

**Extensibility**: Ready for integration with game engines (e.g., Unity) via FastAPI, and for cloud deployments on AWS Lambda or Azure Functions.

## Purpose

This demo showcases expertise in AI-driven game development, focusing on Mario AI pathfinding, enemy tactics, and gameplay optimization. Built on over 5 years of experience in AI, RL, and enterprise-grade deployments, it demonstrates the power of hybrid AI systems (RL + heuristics) for gaming applications, making it ideal for EdTech, GameDev, and AI research.

## Future Enhancements

- **LLM Integration**: Incorporate lightweight LLMs (e.g., distilgpt2) for dynamic NPC dialogue generation.
- **FastAPI Deployment**: Expose the AI pipeline via FastAPI endpoints for production-grade inference.
- **Multiplayer Support**: Extend to a multiplayer co-op mode with competing AI agents.
- **Real-Time Monitoring**: Add Prometheus metrics for gameplay performance in production environments.
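As a rough illustration of the reward shaping described in the preprocessing pipeline above, here is a toy sketch of what such a reward function could look like; the event names and structure are our assumptions for illustration, not the repo's actual code:

```python
def shaped_reward(events: dict) -> float:
    """Toy reward shaping matching the values quoted in the card:
    +10 per coin, +50 per defeated enemy, +1000 for level completion."""
    reward = 0.0
    reward += 10.0 * events.get("coins_collected", 0)
    reward += 50.0 * events.get("enemies_defeated", 0)
    if events.get("level_complete", False):
        reward += 1000.0
    return reward

# Example: one step in which Mario grabs a coin and stomps a Goomba.
print(shaped_reward({"coins_collected": 1, "enemies_defeated": 1}))  # 60.0
```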
**Website**: https://ghostainews.com/
**Discord**: https://discord.gg/BfA23aYz

## Latest Update

**Status Update**: Optimized collision detection for smoother interactions - May 28, 2025 📝

- Integrated new collectible items for bonus challenges ⚡ - May 30, 2025
- Enhanced NPC dialogue with dynamic responses 🔥 - May 29, 2025
- Optimized collision detection for smoother interactions - May 28, 2025
- Upgraded power-up distribution system 🎩
- Introduced adaptive weather in game levels 🪙
- Tweaked jump controls for improved accuracy 🍄
- Added fresh enemy tactics for extra difficulty
- Refined AI pathfinding for seamless gameplay 🌈
- Added support for multiplayer co-op mode 🎉
- Improved level loading times by 30% ✨
Mridul2003/identity-hate-detector
Mridul2003
2025-05-30T07:42:46Z
2
0
null
[ "safetensors", "bert", "base_model:unitary/toxic-bert", "base_model:finetune:unitary/toxic-bert", "region:us" ]
null
2025-05-28T08:26:04Z
---
metrics:
- accuracy
base_model:
- unitary/toxic-bert
---

## Use Model

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

identity_model = AutoModelForSequenceClassification.from_pretrained("Mridul2003/identity-hate-detector").to(device)
identity_tokenizer = AutoTokenizer.from_pretrained("Mridul2003/identity-hate-detector")

final_text = "your input text here"  # the text to classify
results = {}

identity_inputs = identity_tokenizer(final_text, return_tensors="pt", padding=True, truncation=True)
if "token_type_ids" in identity_inputs:
    del identity_inputs["token_type_ids"]
identity_inputs = {k: v.to(device) for k, v in identity_inputs.items()}

with torch.no_grad():
    identity_outputs = identity_model(**identity_inputs)
    identity_probs = torch.sigmoid(identity_outputs.logits)

identity_prob = identity_probs[0][1].item()
not_identity_prob = identity_probs[0][0].item()
results["identity_hate_custom"] = identity_prob
results["not_identity_hate_custom"] = not_identity_prob
```

# Offensive Language Classifier (Fine-Tuned on Custom Dataset)

This repository contains a fine-tuned version of the [`unitary/toxic-bert`](https://huggingface.co/unitary/toxic-bert) model for binary classification of offensive language (labels: `Offensive` vs `Not Offensive`). The model has been fine-tuned on a custom dataset because of limitations observed in the base model's performance, particularly with `identity_hate`-related content.

---

## 🔍 Problem with Base Model (`unitary/toxic-bert`)

The original `unitary/toxic-bert` model is trained for multi-label toxicity detection with 6 categories:

- toxic
- severe toxic
- obscene
- threat
- insult
- identity_hate

While it performs reasonably well on generic toxicity, **it struggles with edge cases involving identity-based hate speech**, often:

- Misclassifying subtle or sarcastic identity attacks
- Underestimating offensive content with identity-specific slurs

---

## ✅ Why Fine-Tune?

We fine-tuned the model on a custom annotated dataset with two clear labels:

- `0`: Not Identity Hate
- `1`: Identity Hate

The new model simplifies the task into a **binary classification problem**, allowing more focused training for real-world moderation scenarios.

---

## 📊 Dataset Overview

- Total examples: ~4,000+
- Balanced between offensive and non-offensive labels
- Contains high proportions of `identity_hate`, `obscene`, `insult`, and more nuanced samples

---

## 🧠 Model Details

- **Base model**: [`unitary/toxic-bert`](https://huggingface.co/unitary/toxic-bert)
- **Fine-tuned using**: Hugging Face 🤗 `Trainer` API
- **Loss function**: CrossEntropyLoss (via `num_labels=2`)
- **Batch size**: 8
- **Epochs**: 3
- **Learning rate**: 2e-5

---

## 🔬 Performance (Binary Classification)

| Metric | Value |
|----------|---------|
| Accuracy | ~92% |
| Precision / Recall | Balanced |

---
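For reference, a minimal fine-tuning sketch consistent with the hyperparameters listed above; the dataset file and column names here are assumptions for illustration, not the actual custom dataset:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("unitary/toxic-bert")
# The base model ships with 6 toxicity labels, so the 2-label head must be
# re-initialized; ignore_mismatched_sizes allows that.
model = AutoModelForSequenceClassification.from_pretrained(
    "unitary/toxic-bert", num_labels=2, ignore_mismatched_sizes=True
)

# Hypothetical CSV with "text" and "label" (0/1) columns.
dataset = load_dataset("csv", data_files={"train": "custom_identity_hate.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="identity-hate-detector",
    per_device_train_batch_size=8,  # batch size 8, as stated in the card
    num_train_epochs=3,             # 3 epochs
    learning_rate=2e-5,             # 2e-5
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()
```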
BSC-LT/salamandraTA-7b-instruct
BSC-LT
2025-05-30T07:42:26Z
1,448
11
transformers
[ "transformers", "safetensors", "llama", "text-generation", "translation", "bg", "ca", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fi", "fr", "ga", "gl", "hr", "hu", "it", "lt", "lv", "mt", "nl", "nb", "no", "nn", "oc", "pl", "pt", "ro", "ru", "sl", "sk", "sr", "sv", "uk", "ast", "an", "arxiv:2010.11125", "arxiv:2403.14009", "arxiv:1907.05791", "arxiv:1911.04944", "arxiv:2402.17733", "arxiv:2207.04672", "arxiv:2404.06392", "arxiv:2309.04662", "base_model:BSC-LT/salamandra-7b", "base_model:finetune:BSC-LT/salamandra-7b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:eu" ]
translation
2025-01-08T15:02:52Z
---
license: apache-2.0
library_name: transformers
pipeline_tag: translation
language:
- bg
- ca
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nb
- 'no'
- nn
- oc
- pl
- pt
- ro
- ru
- sl
- sk
- sr
- sv
- uk
- ast
- an
base_model:
- BSC-LT/salamandra-7b
---

![](./images/salamandra_header.png)

# SalamandraTA Model Card

SalamandraTA-7b-instruct is a translation LLM that has been instruction-tuned from SalamandraTA-7b-base. The base model results from continually pre-training [Salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b) on parallel data; it has not been published and is reserved for internal use. SalamandraTA-7b-instruct is proficient in 35 European languages (plus 3 varieties) and supports translation-related tasks, namely: sentence-level translation, paragraph-level translation, document-level translation, automatic post-editing, grammar checking, machine translation evaluation, alternative translations, named-entity recognition, and context-aware translation.

> [!WARNING]
> **DISCLAIMER:** This version of Salamandra is tailored exclusively for translation tasks. It lacks chat capabilities and has not been trained with any chat instructions.

---

## Model Details

### Description

SalamandraTA-7b-base is a continual pre-training of [Salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b) using parallel data, resulting in a total of 424B tokens processed during training.

### Architecture

| | |
|-------------------------|:--------------|
| Total Parameters | 7,768,117,248 |
| Embedding Parameters | 1,048,576,000 |
| Layers | 32 |
| Hidden size | 4,096 |
| Attention heads | 32 |
| Context length | 8,192 |
| Vocabulary size | 256,000 |
| Precision | bfloat16 |
| Embedding type | RoPE |
| Activation Function | SwiGLU |
| Layer normalization | RMS Norm |
| Flash attention | ✅ |
| Grouped Query Attention | ✅ |
| Num. query groups | 8 |

---

## Intended Use

### Direct Use

The model is intended for both research and commercial use in any of the languages included in the training data for general machine translation tasks.

### Out-of-scope Use

The model is not intended for malicious activities, such as harming others or violating human rights. Any downstream application must comply with current laws and regulations. Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.

---

## Hardware and Software

### Training Framework

SalamandraTA-7b-base was continually pre-trained using NVIDIA's [NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html), which leverages PyTorch Lightning for efficient model training in highly distributed settings.

SalamandraTA-7b-instruct was produced with [FastChat](https://github.com/lm-sys/FastChat).

### Compute Infrastructure

All models were trained on [MareNostrum 5](https://www.bsc.es/ca/marenostrum/marenostrum-5), a pre-exascale EuroHPC supercomputer hosted and operated by Barcelona Supercomputing Center.
The accelerated partition is composed of 1,120 nodes with the following specifications: - 4x Nvidia Hopper GPUs with 64GB HBM2 memory - 2x Intel Sapphire Rapids 8460Y+ at 2.3Ghz and 32c each (64 cores) - 4x NDR200 (BW per node 800Gb/s) - 512 GB of Main memory (DDR5) - 460GB on NVMe storage --- ## How to use You can translate between the following 35 languages (and 3 varieties): Aragonese, Asturian, Basque, Bulgarian, Catalan and Valencian variety, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Norwegian (Bokmål and Nynorsk varieties), Occitan and Aranese variety, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Ukrainian, Welsh. The instruction-following model uses the commonly adopted ChatML template: ``` <|im_start|>system {SYSTEM PROMPT}<|im_end|> <|im_start|>user {USER PROMPT}<|im_end|> <|im_start|>assistant {MODEL RESPONSE}<|im_end|> <|im_start|>user [...] ``` The easiest way to apply it is by using the tokenizer's built-in functions, as shown in the following snippet. ```python from datetime import datetime from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "BSC-LT/salamandraTA-7b-instruct" source = 'Spanish' target = 'Catalan' sentence = "Ayer se fue, tomó sus cosas y se puso a navegar. Una camisa, un pantalón vaquero y una canción, dónde irá, dónde irá. Se despidió, y decidió batirse en duelo con el mar. Y recorrer el mundo en su velero. Y navegar, nai-na-na, navegar" text = f"Translate the following text from {source} into {target}.\n{source}: {sentence} \n{target}:" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.bfloat16 ) message = [ { "role": "user", "content": text } ] date_string = datetime.today().strftime('%Y-%m-%d') prompt = tokenizer.apply_chat_template( message, tokenize=False, add_generation_prompt=True, date_string=date_string ) inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") input_length = inputs.shape[1] outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=400, early_stopping=True, num_beams=5) print(tokenizer.decode(outputs[0, input_length:], skip_special_tokens=True)) # Ahir se'n va anar, va recollir les seves coses i es va fer a la mar. Una camisa, uns texans i una cançó, on anirà, on anirà. Es va acomiadar i va decidir batre's en duel amb el mar. I fer la volta al món en el seu veler. I navegar, nai-na-na, navegar ``` Using this template, each turn is preceded by a `<|im_start|>` delimiter and the role of the entity (either `user`, for content supplied by the user, or `assistant` for LLM responses), and finished with the `<|im_end|>` token. #### General translation For machine translation tasks, you can use the following prompt template: ``` Translate the following text from {source} into {target}. {source}: {source sentence} {target}: ``` <details> <summary>Show an example</summary> ```python source = 'Catalan' target = 'Galician' source_sentence = "Als antics egipcis del període de l'Imperi Nou els fascinaven els monuments dels seus predecessors, que llavors tenien més de mil anys." 
text = f"Translate the following text from {source} into {target}.\n{source}: {source_sentence} \n{target}:" # Os antigos exipcios do período do Imperio Novo estaban fascinados polos monumentos dos seus predecesores, que entón tiñan máis de mil anos de antigüidade. ``` </details> ### Post-editing For post-editing tasks, you can use the following prompt template: ``` Please fix any mistakes in the following {source}-{target} machine translation or keep it unedited if it's correct. Source: {source_sentence} MT: {machine_translation} Corrected:" ``` <details> <summary>Show an example</summary> ```python source = 'Catalan' target = 'English' source_sentence = 'Rafael Nadal i Maria Magdalena van inspirar a una generació sencera.' machine_translation = 'Rafael Christmas and Maria the Muffin inspired an entire generation each in their own way.' text = f"Please fix any mistakes in the following {source}-{target} machine translation or keep it unedited if it's correct.\nSource: {source_sentence} \nMT: {machine_translation} \nCorrected:" # Rafael Nadal and Maria Magdalena inspired an entire generation. ``` </details> ### Document-level translation For document-level translation tasks, you can use the following prompt template: ``` Please translate this text from {source} into {target}. {source}: {1st paragraph of the document} {2nd paragraph of the document} {Nth paragraph of the document} {target}: ``` <details> <summary>Show an example</summary> ```python source = 'English' target = 'Asturian' text = """Please translate this text from {} into {}.\n{}: President Donald Trump, who campaigned on promises to crack down on illegal immigration, has raised alarms in the U.S. dairy industry with his threat to impose 25% tariffs on Mexico and Canada by February 2025. This move is part of a broader strategy to declare a national emergency at the southern border to halt illegal migration completely. However, the implications for the agriculture sector, particularly dairy, are significant. Approximately half of the U.S. dairy industry's workforce consists of immigrant labor, many of whom are undocumented. The National Milk Producers Federation estimates that removing immigrant workers could decimate the dairy herd by 2.1 million cows and slash milk production by nearly 50 billion pounds, leading to a dramatic 90.4% increase in milk prices. The complex perspectives of Americans on undocumented workers were highlighted in a Pew Research Center study. While 64% of U.S. adults support legal pathways for undocumented immigrants, 35% oppose it—a gap that has been narrowing recently. Factors influencing public opinion include the belief that immigrants should have jobs and pass security checks, contrasted by concerns about lawbreakers being rewarded, fairness for legal migrants, and resource allocation. According to Zach Rutledge, an agricultural economist at Michigan State University, as nations grow wealthier, their labor forces transition away from agriculture toward sectors like services and manufacturing. This shift has led to the U.S. relying heavily on immigrant labor for agricultural work. Domestic workers, even with employment taxes, may cost $15 to $25 an hour, while H-2A visa program workers might cost $25 to $30 an hour, accounting for additional housing expenses. The National Milk Producers Federation has been vocal in advocating for changes to the H-2A visa program, which outside of its current seasonal limitations, does not support the dairy industry's year-round labor needs. 
Executive vice-president Jaime Castaneda reiterated the need for legislative clarity to address the undocumented workforce issues in dairy farming. The Farm Workforce Modernization Act of 2023, which could grant legal status to certain undocumented farmworkers, has been stalled in Congress, despite acknowledgment of the sector's importance to feeding America. The need for coordinated legislative efforts to ensure both border security and labor market stability is imperative moving forward. {}:""".format(source, target, source, target) ``` </details> ### Named-entity recognition For named-entity recognition tasks, you can use the following prompt template: ``` Analyse the following tokenized text and mark the tokens containing named entities. Use the following annotation guidelines with these tags for named entities: - ORG (Refers to named groups or organizations) - PER (Refers to individual people or named groups of people) - LOC (Refers to physical places or natural landmarks) - MISC (Refers to entities that don't fit into standard categories). Prepend B- to the first token of a given entity and I- to the remaining ones if they exist. If a token is not a named entity, label it as O. Input: {list of words in a sentence} Marked: ``` <details> <summary>Show an example</summary> ```python text = """Analyse the following tokenized text and mark the tokens containing named entities. Use the following annotation guidelines with these tags for named entities: - ORG (Refers to named groups or organizations) - PER (Refers to individual people or named groups of people) - LOC (Refers to physical places or natural landmarks) - MISC (Refers to entities that don't fit into standard categories). Prepend B- to the first token of a given entity and I- to the remaining ones if they exist. If a token is not a named entity, label it as O. Input: ['La', 'defensa', 'del', 'antiguo', 'responsable', 'de', 'la', 'RFEF', 'confirma', 'que', 'interpondrá', 'un', 'recurso.'] Marked: """ # [('La', 'O'), ('defensa', 'O'), ('del', 'O'), ('antiguo', 'O'), ('responsable', 'O'), ('de', 'O'), ('la', 'O'), ('RFEF', 'B-ORG'), ('confirma', 'O'), ('que', 'O'), ('interpondrá', 'O'), ('un', 'O'), ('recurso.', 'O')] ``` </details> ### Grammar checker For fixing any mistakes in grammar, you can use the following prompt template: ``` Please fix any mistakes in the following {source} sentence or keep it unedited if it's correct. Sentence: {sentence} Corrected: ``` <details> <summary>Show an example</summary> ```python source = 'Catalan' sentence = 'Entonses, el meu jefe m’ha dit que he de treballar els fins de setmana.' text = f"Please fix any mistakes in the following {source} sentence or keep it unedited if it's correct.\nSentence: {sentence} \nCorrected:" # Llavors, el meu cap m'ha dit que he de treballar els caps de setmana. ``` </details> ## Data ### Pretraining Data The pretraining corpus consists of 424 billion tokens of Catalan-centric, Spanish-centric, and English-centric parallel data, including all of the official European languages plus Catalan, Basque, Galician, Asturian, Aragonese and Aranese. It amounts to 6,574,251,526 parallel sentence pairs. This highly multilingual corpus is predominantly composed of data sourced from [OPUS](https://opus.nlpl.eu/), with additional data taken from the [NTEU Project](https://nteu.eu/), [Aina Project](https://projecteaina.cat/), and other sources (see: [Data Sources](#pre-data-sources) and [References](#pre-references)). 
Where little parallel Catalan <-> xx data was available, synthetic Catalan data was generated from the Spanish side of the collected Spanish <-> xx corpora using [Projecte Aina’s Spanish-Catalan model](https://huggingface.co/projecte-aina/aina-translator-es-ca). The final distribution of languages is shown below:

![](./images/treemap.png)

Click the expand button below to see the full list of corpora included in the training data.

<details id="pre-data-sources">
<summary>Data Sources</summary>

| Dataset | Ca-xx Languages | Es-xx Languages | En-xx Languages |
|---|---|---|---|
|[AINA](https://huggingface.co/projecte-aina) | en | | |
|ARANESE-SYNTH-CORPUS-BSC | arn | | |
|BOUA-SYNTH-BSC | | val | |
|[BOUMH](https://github.com/transducens/PILAR/tree/main/valencian/BOUMH) | | val | |
|[BOUA-PILAR](https://github.com/transducens/PILAR/tree/main/valencian/BOUA) | | val | |
|[CCMatrix](https://opus.nlpl.eu/CCMatrix/corpus/version/CCMatrix) |eu | | ga |
|[DGT](https://opus.nlpl.eu/DGT/corpus/version/DGT) | |bg,cs,da,de,el,et,fi,fr,ga,hr,hu,lt,lv,mt,nl,pl,pt,ro,sk,sl,sv | da,et,ga,hr,hu,lt,lv,mt,sh,sl|
|DOGV-SYNTH-BSC | | val | |
|[DOGV-PILAR](https://github.com/transducens/PILAR/tree/main/valencian/DOGV-html) | | val | |
|[ELRC-EMEA](https://opus.nlpl.eu/ELRC-EMEA/corpus/version/ELRC-EMEA) | |bg,cs,da,hu,lt,lv,mt,pl,ro,sk,sl | et,hr,lv,ro,sk,sl |
|[EMEA](https://opus.nlpl.eu/EMEA/corpus/version/EMEA) | |bg,cs,da,el,fi,hu,lt,mt,nl,pl,ro,sk,sl,sv | et,mt |
|[EUBookshop](https://opus.nlpl.eu/EUbookshop/corpus/version/EUbookshop) |lt,pl,pt |cs,da,de,el,fi,fr,ga,it,lv,mt,nl,pl,pt,ro,sk,sl,sv |cy,ga|
|[Europarl](https://opus.nlpl.eu/Europarl/corpus/version/Europarl) | |bg,cs,da,el,en,fi,fr,hu,lt,lv,nl,pl,pt,ro,sk,sl,sv | |
|[EuroPat](https://opus.nlpl.eu/EuroPat/corpus/version/EuroPat) | |en,hr | no |
|[GAITU Corpus](https://gaitu.eus/) | | | eu|
|[KDE4](https://opus.nlpl.eu/KDE4/corpus/version/KDE4) |bg,cs,da,de,el,et,eu,fi,fr,ga,gl,hr,it,lt,lv,nl,pl,pt,ro,sk,sl,sv |bg,ga,hr |cy,ga,nn,oc |
|[GlobalVoices](https://opus.nlpl.eu/GlobalVoices/corpus/version/GlobalVoices) | bg,de,fr,it,nl,pl,pt |bg,de,fr,pt | |
|[GNOME](https://opus.nlpl.eu/GNOME/corpus/version/GNOME) |eu,fr,ga,gl,pt |ga |cy,ga,nn|
|[JRC-Acquis](https://opus.nlpl.eu/JRC-Acquis/corpus/version/JRC-Acquis) | |cs,da,et,fr,lt,lv,mt,nl,pl,ro,sv| et |
|LES-CORTS-VALENCIANES-SYNTH-BSC | | val | |
|[MaCoCu](https://opus.nlpl.eu/MaCoCu/corpus/version/MaCoCu) | en | | hr,mt,uk |
|[MultiCCAligned](https://opus.nlpl.eu/JRC-Acquis/corpus/version/JRC-Acquis) |bg,cs,de,el,et,fi,fr,hr,hu,it,lt,lv,nl,pl,ro,sk,sv |bg,fi,fr,hr,it,lv,nl,pt |bg,cy,da,et,fi,hr,hu,lt,lv,no,sl,sr,uk|
|[MultiHPLT](https://opus.nlpl.eu/MultiHPLT/corpus/version/MultiHPLT) |en,et,fi,ga,hr,mt | |fi,ga,gl,hr,mt,nn,sr |
|[MultiParaCrawl](https://opus.nlpl.eu/MultiParaCrawl/corpus/version/MultiParaCrawl) |bg,da |de,en,fr,ga,hr,hu,it,mt,pt |bg,cs,da,de,el,et,fi,fr,ga,hr,hu,lt,lv,mt,nn,pl,ro,sk,sl,uk|
|[MultiUN](https://opus.nlpl.eu/MultiUN/corpus/version/MultiUN) | |fr | |
|[News-Commentary](https://opus.nlpl.eu/News-Commentary/corpus/version/News-Commentary) | |fr | |
|[NLLB](https://opus.nlpl.eu/NLLB/corpus/version/NLLB) |bg,da,el,en,et,fi,fr,gl,hu,it,lt,lv,pt,ro,sk,sl |bg,cs,da,de,el,et,fi,fr,hu,it,lt,lv,nl,pl,pt,ro,sk,sl,sv| bg,cs,cy,da,de,el,et,fi,fr,ga,hr,hu,it,lt,lv,mt,nl,no,oc,pl,pt,ro,ru,sk,sl,sr,sv,uk|
|[NÓS Authentic Corpus](https://zenodo.org/records/7675110) | | | gl |
|[NÓS Synthetic Corpus](https://zenodo.org/records/7685180) | | | gl |
|[NTEU](https://www.elrc-share.eu/repository/search/?q=NTEU) | |bg,cs,da,de,el,en,et,fi,fr,ga,hr,hu,it,lt,lv,mt,nl,pl,pt,ro,sk,sl,sv | da,et,ga,hr,lt,lv,mt,ro,sk,sl,sv |
|[OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) |bg,cs,da,de,el,et,eu,fi,gl,hr,hu,lt,lv,nl,pl,pt,ro,sk,sl,sv |da,de,fi,fr,hr,hu,it,lv,nl | bg,cs,de,el,et,fi,fr,hr,hu,no,sl,sr|
|[OPUS-100](https://opus.nlpl.eu/opus-100.php) | en | | gl |
|[StanfordNLP-NMT](https://opus.nlpl.eu/StanfordNLP-NMT/corpus/version/StanfordNLP-NMT) | | |cs |
|[Tatoeba](https://opus.nlpl.eu/Tatoeba/corpus/version/Tatoeba) |de,pt |pt | |
|[TildeModel](https://opus.nlpl.eu/TildeMODEL/corpus/version/TildeMODEL) | |bg | et,hr,lt,lv,mt |
|[UNPC](https://opus.nlpl.eu/UNPC/corpus/version/UNPC) | |en,fr | ru |
|[PILAR-VALENCIAN-AUTH](https://github.com/transducens/PILAR/tree/main/valencian/Generalitat) | | val | |
|[PILAR-VALENCIAN-SYNTH](https://github.com/transducens/PILAR/tree/main/valencian/Generalitat) | | val | |
|[WikiMatrix](https://opus.nlpl.eu/WikiMatrix/corpus/version/WikiMatrix) |bg,cs,da,de,el,et,eu,fi,fr,gl,hr,hu,it,lt,nl,pl,pt,ro,sk,sl,sv |bg,en,fr,hr,it,pt | oc,sh |
|[Wikimedia](https://opus.nlpl.eu/wikimedia/corpus/version/wikimedia) | | |cy,nn |
|[XLENT](https://opus.nlpl.eu/XLEnt/corpus/version/XLEnt) |eu,ga,gl |ga |cy,et,ga,gl,hr,oc,sh|

Datasets with "-BSC" in their names (e.g., BOUA-SYNTH-BSC, DOGV-SYNTH-BSC) are synthetic datasets obtained by machine translating pre-existing monolingual corpora with our own seq-to-seq models. These datasets were generated internally for model training and are not published. To consult the data summary document with the respective licences, please send an e-mail to [email protected].

</details>

<details id="pre-references">
<summary>References</summary>

- Aulamo, M., Sulubacak, U., Virpioja, S., & Tiedemann, J. (2020). OpusTools and Parallel Corpus Diagnostics. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Twelfth Language Resources and Evaluation Conference (pp. 3782–3789). European Language Resources Association. https://aclanthology.org/2020.lrec-1.467
- Chaudhary, V., Tang, Y., Guzmán, F., Schwenk, H., & Koehn, P. (2019). Low-Resource Corpus Filtering Using Multilingual Sentence Embeddings. In O. Bojar, R. Chatterjee, C. Federmann, M. Fishel, Y. Graham, B. Haddow, M. Huck, A. J. Yepes, P. Koehn, A. Martins, C. Monz, M. Negri, A. Névéol, M. Neves, M. Post, M. Turchi, & K. Verspoor (Eds.), Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2) (pp. 261–266). Association for Computational Linguistics. https://doi.org/10.18653/v1/W19-5435
- DGT-Translation Memory—European Commission. (n.d.). Retrieved November 4, 2024, from https://joint-research-centre.ec.europa.eu/language-technology-resources/dgt-translation-memory_en
- Eisele, A., & Chen, Y. (2010). MultiUN: A Multilingual Corpus from United Nation Documents. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10). European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_Paper.pdf
- El-Kishky, A., Chaudhary, V., Guzmán, F., & Koehn, P. (2020). CCAligned: A Massive Collection of Cross-Lingual Web-Document Pairs. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 5960–5969. https://doi.org/10.18653/v1/2020.emnlp-main.480
- El-Kishky, A., Renduchintala, A., Cross, J., Guzmán, F., & Koehn, P. (2021). XLEnt: Mining a Large Cross-lingual Entity Dataset with Lexical-Semantic-Phonetic Word Alignment. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 10424–10430. https://doi.org/10.18653/v1/2021.emnlp-main.814
- Fan, A., Bhosale, S., Schwenk, H., Ma, Z., El-Kishky, A., Goyal, S., Baines, M., Celebi, O., Wenzek, G., Chaudhary, V., Goyal, N., Birch, T., Liptchinsky, V., Edunov, S., Grave, E., Auli, M., & Joulin, A. (2020). Beyond English-Centric Multilingual Machine Translation (No. arXiv:2010.11125). arXiv. https://doi.org/10.48550/arXiv.2010.11125
- García-Martínez, M., Bié, L., Cerdà, A., Estela, A., Herranz, M., Krišlauks, R., Melero, M., O’Dowd, T., O’Gorman, S., Pinnis, M., Stafanovič, A., Superbo, R., & Vasiļevskis, A. (2021). Neural Translation for European Union (NTEU). 316–334. https://aclanthology.org/2021.mtsummit-up.23
- Gibert, O. de, Nail, G., Arefyev, N., Bañón, M., Linde, J. van der, Ji, S., Zaragoza-Bernabeu, J., Aulamo, M., Ramírez-Sánchez, G., Kutuzov, A., Pyysalo, S., Oepen, S., & Tiedemann, J. (2024). A New Massive Multilingual Dataset for High-Performance Language Technologies (No. arXiv:2403.14009). arXiv. http://arxiv.org/abs/2403.14009
- Koehn, P. (2005). Europarl: A Parallel Corpus for Statistical Machine Translation. Proceedings of Machine Translation Summit X: Papers, 79–86. https://aclanthology.org/2005.mtsummit-papers.11
- Kreutzer, J., Caswell, I., Wang, L., Wahab, A., Van Esch, D., Ulzii-Orshikh, N., Tapo, A., Subramani, N., Sokolov, A., Sikasote, C., Setyawan, M., Sarin, S., Samb, S., Sagot, B., Rivera, C., Rios, A., Papadimitriou, I., Osei, S., Suarez, P. O., … Adeyemi, M. (2022). Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics, 10, 50–72. https://doi.org/10.1162/tacl_a_00447
- Rozis, R., & Skadiņš, R. (2017). Tilde MODEL - Multilingual Open Data for EU Languages. https://aclanthology.org/W17-0235
- Schwenk, H., Chaudhary, V., Sun, S., Gong, H., & Guzmán, F. (2019). WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia (No. arXiv:1907.05791). arXiv. https://doi.org/10.48550/arXiv.1907.05791
- Schwenk, H., Wenzek, G., Edunov, S., Grave, E., & Joulin, A. (2020). CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB (No. arXiv:1911.04944). arXiv. https://doi.org/10.48550/arXiv.1911.04944
- Steinberger, R., Pouliquen, B., Widiger, A., Ignat, C., Erjavec, T., Tufiş, D., & Varga, D. (n.d.). The JRC-Acquis: A Multilingual Aligned Parallel Corpus with 20+ Languages. http://www.lrec-conf.org/proceedings/lrec2006/pdf/340_pdf
- Subramani, N., Luccioni, S., Dodge, J., & Mitchell, M. (2023). Detecting Personal Information in Training Corpora: An Analysis. In A. Ovalle, K.-W. Chang, N. Mehrabi, Y. Pruksachatkun, A. Galystan, J. Dhamala, A. Verma, T. Cao, A. Kumar, & R. Gupta (Eds.), Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023) (pp. 208–220). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.trustnlp-1.18
- Tiedemann, J. (2012). Parallel Data, Tools and Interfaces in OPUS. In N. Calzolari (Conference Chair), K. Choukri, T. Declerck, M. U. Doğan, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12). European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper
- Ziemski, M., Junczys-Dowmunt, M., & Pouliquen, B. (n.d.). The United Nations Parallel Corpus v1.0. https://aclanthology.org/L16-1561

</details>

### Instruction Tuning Data

This model has been fine-tuned on ~135k instructions, primarily targeting machine translation performance for Catalan, English, and Spanish. Additional instruction data for other European and closely related Iberian languages was also included, as it yielded a positive impact on the languages of interest. That said, performance in these additional languages is not guaranteed due to the limited amount of available data and the lack of resources for thorough testing.

A portion of our fine-tuning data comes directly from, or is sampled from, [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2). We also created additional datasets for our main languages of interest. Although the data covers a range of translation-related tasks, no chat data was used in the fine-tuning process. The final distribution of tasks is shown below:

![](./images/chart.png)

Click the expand button below to see the full list of tasks included in the fine-tuning data.

<details id="instr-data-sources">
<summary>Data Sources</summary>

| Task | Source | Languages | Count |
|---|---|---|---|
| Multi-reference Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [Tatoeba Dev (filtered)](https://github.com/Helsinki-NLP/Tatoeba-Challenge) | mixed | 10000 |
| Paraphrase | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [PAWS-X Dev](https://github.com/google-research-datasets/paws) | mixed | 3521 |
| Named-entity Recognition | [AnCora-Ca-NER](https://huggingface.co/datasets/projecte-aina/ancora-ca-ner) | ca | 12059 |
| Named-entity Recognition | [BasqueGLUE](https://huggingface.co/datasets/orai-nlp/basqueGLUE), [EusIE](https://huggingface.co/datasets/HiTZ/EusIE) | eu | 4304 |
| Named-entity Recognition | [SLI NERC Galician Gold Corpus](https://github.com/xavier-gz/SLI_Galician_Corpora) | gl | 6483 |
| Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | pt | 854 |
| Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | nl | 800 |
| Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | es | 1654 |
| Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | en | 1671 |
| Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | ru | 800 |
| Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | it | 858 |
| Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | fr | 857 |
| Named-entity Recognition | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | de | 1312 |
| Terminology-aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [WMT21 Terminology Dev (filtered)](https://www.statmt.org/wmt21/terminology-task.html) | en-ru | 50 |
| Terminology-aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [WMT21 Terminology Dev (filtered)](https://www.statmt.org/wmt21/terminology-task.html) | en-fr | 29 |
| Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | en-fr | 6133 |
| Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | en-nl | 9077 |
| Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | en-pt | 5762 |
| Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | de-en | 10000 |
| Automatic Post Editing | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/) | en-de | 10000 |
| Machine Translation Evaluation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2)-sample: [WMT20 to WMT22 Metrics MQM](https://www.statmt.org/wmt22/results.html), [WMT17 to WMT22 Metrics Direct Assessments](https://www.statmt.org/wmt22/results.html) | en-ru, en-pl, ru-en, en-de, en-ru, de-fr, de-en, en-de | 353 |
| Machine Translation Evaluation | Non-public | four pivot languages (eu, es, ca, gl) paired with European languages (bg, cs, da, de, el, en, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv) | 9700 |
| General Machine Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [WMT14 to WMT21](https://www.statmt.org/wmt22/results.html), [NTREX](https://github.com/MicrosoftTranslator/NTREX), [Flores Dev](https://github.com/facebookresearch/flores), [FRMT](https://github.com/google-research/google-research/tree/master/frmt), [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390), [ApeQuest](https://apequest.wordpress.com/), [OPUS (Quality Filtered)](https://opus.nlpl.eu/), [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | nl-en, en-ru, it-en, fr-en, es-en, en-fr, ru-en, fr-de, en-nl, de-fr | 500 |
| General Machine Translation | Non-public | three pivot languages (es, ca, en) paired with European languages (ast, arn, arg, bg, cs, cy, da, de, el, et, fi, ga, gl, hr, it, lt, lv, mt, nb, nn, nl, oc, pl, pt, ro, ru, sk, sl, sr, sv, uk, eu) | 9350 |
| Fill-in-the-Blank | Non-public | five pivot languages (ca, es, eu, gl, en) paired with European languages (cs, da, de, el, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv) | 11500 |
| Document-level Translation | Non-public | two pivot languages (es, en) paired with European languages (bg, cs, da, de, el, et, fi, fr, hu, it, lt, lv, nl, pl, pt, ro, ru, sk, sv) | 7600 |
| Paragraph-level Translation | Non-public | two pivot languages (es, en) paired with European languages (bg, cs, da, de, el, et, fi, fr, hu, it, lt, lv, nl, pl, pt, ro, ru, sk, sv) | 7600 |
| Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-it | 348 |
| Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-ru | 454 |
| Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-fr | 369 |
| Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-nl | 417 |
| Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-es | 431 |
| Context-Aware Translation | [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2): [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | en-de | 558 |
|**Total** | | | **135,404** |

The non-public portion of this dataset was jointly created by the [ILENIA](https://proyectoilenia.es/) partners: BSC-LT, [HiTZ](http://hitz.ehu.eus/es), and [CiTIUS](https://citius.gal/es/). For further information regarding the instruction-tuning data, please contact <[email protected]>.

</details>

<details id="instr-references">
<summary>References</summary>

- Alves, D. M., Pombal, J., Guerreiro, N. M., Martins, P. H., Alves, J., Farajian, A., Peters, B., Rei, R., Fernandes, P., Agrawal, S., Colombo, P., de Souza, J. G. C., & Martins, A. F. T. (2024). Tower: An open multilingual large language model for translation-related tasks (No. arXiv:2402.17733). arXiv. https://arxiv.org/abs/2402.17733
- Armengol-Estapé, J., Carrino, C. P., Rodriguez-Penagos, C., de Gibert Bonet, O., Armentano-Oller, C., Gonzalez-Agirre, A., Melero, M., & Villegas, M. (2021). Are multilingual models the best choice for moderately under-resourced languages? A comprehensive assessment for Catalan. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 4933–4946. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-acl.437
- Currey, A., Nadejde, M., Pappagari, R. R., Mayer, M., Lauly, S., Niu, X., Hsu, B., & Dinu, G. (2022). MT-GenEval: A counterfactual and contextual dataset for evaluating gender accuracy in machine translation. In Y. Goldberg, Z. Kozareva, & Y. Zhang (Eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 4287–4299). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emnlp-main.288
- Federmann, C., Kocmi, T., & Xin, Y. (2022). NTREX-128 – News test references for MT evaluation of 128 languages. Proceedings of the First Workshop on Scaling Up Multilingual Evaluation, 21–24. Association for Computational Linguistics. https://aclanthology.org/2022.sumeval-1.4
- Ive, J., Specia, L., Szoc, S., Vanallemeersch, T., Van den Bogaert, J., Farah, E., Maroti, C., Ventura, A., & Khalilov, M. (2020). A post-editing dataset in the legal domain: Do we underestimate neural machine translation quality? In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Twelfth Language Resources and Evaluation Conference (pp. 3692–3697). European Language Resources Association. https://aclanthology.org/2020.lrec-1.455/
- Malmasi, S., Fang, A., Fetahu, B., Kar, S., & Rokhlenko, O. (2022). MultiCoNER: A large-scale multilingual dataset for complex named entity recognition. Proceedings of the 29th International Conference on Computational Linguistics, 3798–3809. International Committee on Computational Linguistics. https://aclanthology.org/2022.coling-1.334/
- NLLB Team, Costa-jussà, M. R., Cross, J., Çelebi, O., Elbayad, M., Heafield, K., Heffernan, K., Kalbassi, E., Lam, J., Licht, D., Maillard, J., Sun, A., Wang, S., Wenzek, G., Youngblood, A., Akula, B., Barrault, L., Mejia Gonzalez, G., Hansanti, P., Hoffman, J., Jarrett, S., Sadagopan, K. R., Rowe, D., Spruit, S., Tran, C., Andrews, P., Ayan, N. F., Bhosale, S., Edunov, S., Fan, A., Gao, C., Goswami, V., Guzmán, F., Koehn, P., Mourachko, A., Ropers, C., Saleem, S., Schwenk, H., & Wang, J. (2022). No language left behind: Scaling human-centered machine translation (No. arXiv:2207.04672). arXiv. https://arxiv.org/abs/2207.04672
- Riley, P., Dozat, T., Botha, J. A., Garcia, X., Garrette, D., Riesa, J., Firat, O., & Constant, N. (2022). FRMT: A benchmark for few-shot region-aware machine translation (No. arXiv:2210.00193). arXiv. https://doi.org/10.48550/ARXIV.2210.00193
- Specia, L., Harris, K., Blain, F., Burchardt, A., Macketanz, V., Skadiņa, I., Negri, M., & Turchi, M. (2017). Translation quality and productivity: A study on rich morphology languages. Proceedings of Machine Translation Summit XVI, 55–71. Nagoya, Japan.
- Tiedemann, J. (2020). The Tatoeba translation challenge – Realistic data sets for low-resource and multilingual MT. Proceedings of the Fifth Conference on Machine Translation, 1174–1182. Association for Computational Linguistics. https://www.aclweb.org/anthology/2020.wmt-1.139
- Urbizu, G., San Vicente, I., Saralegi, X., Agerri, R., & Soroa, A. (2022). BasqueGLUE: A natural language understanding benchmark for Basque. Proceedings of the Language Resources and Evaluation Conference, 1603–1612. European Language Resources Association. https://aclanthology.org/2022.lrec-1.172
- Yang, Y., Zhang, Y., Tar, C., & Baldridge, J. (2019). PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 3687–3692). Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1382
- Zubillaga, M., Sainz, O., Estarrona, A., Lopez de Lacalle, O., & Agirre, E. (2024). Event extraction in Basque: Typologically motivated cross-lingual transfer-learning analysis (No. arXiv:2404.06392). arXiv. https://arxiv.org/abs/2404.06392

</details>

## Evaluation

Below are the evaluation results on the [Flores+200 devtest set](https://huggingface.co/datasets/openlanguagedata/flores_plus), compared against the state-of-the-art [MADLAD400-7B-mt model](https://huggingface.co/google/madlad400-7b-mt) ([Kudugunta, S., et al.](https://arxiv.org/abs/2309.04662)) and the SalamandraTA-7b-base model. These results cover the translation directions CA-XX, ES-XX, EN-XX, as well as XX-CA, XX-ES, and XX-EN. The metrics have been computed excluding Asturian, Aranese, and Aragonese, as we report them separately. The evaluation was conducted using [MT-Lens](https://github.com/langtech-bsc/mt-evaluation), following the standard setting (beam search with beam size 5, limiting the translation length to 500 tokens). We report the following metrics:

<details>
<summary>Click to show metrics details</summary>

- `BLEU`: Sacrebleu implementation. Signature: nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.3.1
- `TER`: Sacrebleu implementation.
- `ChrF`: Sacrebleu implementation.
- `Comet`: Model checkpoint: "Unbabel/wmt22-comet-da".
- `Comet-kiwi`: Model checkpoint: "Unbabel/wmt22-cometkiwi-da".
- `Bleurt`: Model checkpoint: "lucadiliello/BLEURT-20".
- `MetricX`: Model checkpoint: "google/metricx-23-xl-v2p0".
- `MetricX-QE`: Model checkpoint: "google/metricx-23-qe-xl-v2p0".

</details>

<details>
<summary>English evaluation</summary>

### English

This section presents the evaluation metrics for English translation tasks.

| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ |
|:---|---:|---:|---:|---:|---:|---:|---:|---:|
| **EN-XX** | | | | | | | | |
| SalamandraTA-7b-instruct | 35.20 | 53.40 | 61.58 | **0.89** | **0.86** | 0.78 | **0.96** | **0.81** |
| MADLAD400-7B | **35.73** | **51.87** | **63.46** | 0.88 | 0.85 | **0.79** | 1.16 | 1.10 |
| SalamandraTA-7b-base | 34.99 | 52.64 | 62.58 | 0.87 | 0.84 | 0.77 | 1.45 | 1.23 |
| **XX-EN** | | | | | | | | |
| SalamandraTA-7b-instruct | **44.37** | **42.49** | 68.29 | **0.89** | **0.86** | **0.80** | **1.05** | **0.99** |
| MADLAD400-7B | 43.20 | 43.33 | 67.98 | **0.89** | **0.86** | **0.80** | 1.13 | 1.15 |
| SalamandraTA-7b-base | 44.12 | 43.00 | **68.43** | **0.89** | 0.85 | **0.80** | 1.13 | 1.22 |

<img src="./images/bleu_en.png" alt="English" width="100%"/>

</details>

<details>
<summary>Spanish evaluation</summary>

### Spanish

This section presents the evaluation metrics for Spanish translation tasks.
| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ |
|:---|---:|---:|---:|---:|---:|---:|---:|---:|
| **ES-XX** | | | | | | | | |
| SalamandraTA-7b-instruct | **23.68** | **67.31** | **53.98** | **0.87** | **0.83** | **0.76** | **0.93** | **0.80** |
| MADLAD400-7B | 22.48 | 68.91 | 53.93 | 0.86 | **0.83** | 0.75 | 1.09 | 1.14 |
| SalamandraTA-7b-base | 21.63 | 70.08 | 52.98 | 0.86 | **0.83** | 0.74 | 1.24 | 1.12 |
| **XX-ES** | | | | | | | | |
| SalamandraTA-7b-instruct | **26.40** | 62.27 | **53.54** | **0.85** | **0.84** | **0.74** | **0.80** | **1.07** |
| MADLAD400-7B | 24.85 | **61.82** | 53.00 | **0.85** | **0.84** | **0.74** | 1.05 | 1.50 |
| SalamandraTA-7b-base | 24.71 | 62.33 | 52.96 | **0.85** | **0.84** | 0.73 | 1.06 | 1.37 |

<img src="./images/bleu_es.png" alt="Spanish" width="100%"/>
<img src="./images/es_xx_bars.png" alt="ES-XX" width="100%"/>

</details>

<details>
<summary>Catalan evaluation</summary>

### Catalan

This section presents the evaluation metrics for Catalan translation tasks.

| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ |
|:---|---:|---:|---:|---:|---:|---:|---:|---:|
| **CA-XX** | | | | | | | | |
| SalamandraTA-7b-instruct | **29.50** | 59.26 | 58.21 | **0.88** | **0.81** | **0.77** | **0.97** | **0.98** |
| MADLAD400-7B | 29.37 | **59.01** | **58.47** | 0.87 | **0.81** | **0.77** | 1.08 | 1.31 |
| SalamandraTA-7b-base | 29.06 | 59.32 | 58.00 | 0.87 | **0.81** | 0.76 | 1.23 | 1.28 |
| **XX-CA** | | | | | | | | |
| SalamandraTA-7b-instruct | **34.51** | **54.21** | **60.10** | **0.86** | **0.81** | **0.76** | **0.90** | **1.29** |
| MADLAD400-7B | 33.02 | 55.01 | 59.38 | **0.86** | **0.81** | 0.75 | 1.18 | 1.79 |
| SalamandraTA-7b-base | 32.75 | 55.78 | 59.42 | **0.86** | **0.81** | 0.75 | 1.17 | 1.63 |

<img src="./images/bleu_ca.png" alt="Catalan" width="100%"/>

</details>

<details>
<summary>Galician evaluation</summary>

### Galician

This section presents the evaluation metrics for Galician translation tasks.

| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ |
|:---|---:|---:|---:|---:|---:|---:|---:|---:|
| **GL-XX** | | | | | | | | |
| SalamandraTA-7b-instruct | **36.95** | **50.12** | **62.55** | **0.88** | **0.85** | **0.77** | **0.86** | **0.98** |
| MADLAD400-7B | 26.43 | 64.30 | 55.99 | 0.86 | **0.85** | 0.76 | 1.35 | 2.06 |
| SalamandraTA-7b-base | 27.47 | 61.39 | 56.96 | 0.87 | 0.82 | 0.76 | 1.23 | 1.29 |
| **XX-GL** | | | | | | | | |
| SalamandraTA-7b-instruct | **34.37** | **52.49** | **60.99** | **0.88** | **0.85** | **0.73** | **0.75** | **0.92** |
| MADLAD400-7B | 27.77 | 59.46 | 54.92 | 0.84 | **0.85** | 0.67 | 1.42 | 2.72 |
| SalamandraTA-7b-base | 28.22 | 59.52 | 56.28 | 0.85 | 0.82 | 0.69 | 1.27 | 1.78 |

<img src="./images/bleu_gl.png" alt="Galician" width="100%"/>

</details>

<details>
<summary>Basque evaluation</summary>

### Basque

This section presents the evaluation metrics for Basque translation tasks.

| | Bleu↑ | Ter↓ | ChrF↑ | Comet↑ | Comet-kiwi↑ | Bleurt↑ | MetricX↓ | MetricX-QE↓ |
|:---|---:|---:|---:|---:|---:|---:|---:|---:|
| **EU-XX** | | | | | | | | |
| SalamandraTA-7b-instruct | **29.89** | **58.54** | **56.66** | **0.87** | **0.85** | **0.76** | **0.90** | **0.89** |
| MADLAD400-7B | 21.26 | 69.75 | 49.80 | 0.85 | 0.82 | 0.72 | 1.54 | 2.71 |
| SalamandraTA-7b-base | 22.87 | 67.38 | 52.19 | 0.86 | 0.79 | 0.74 | 1.19 | 1.61 |
| **XX-EU** | | | | | | | | |
| SalamandraTA-7b-instruct | **18.89** | **71.74** | **57.16** | **0.87** | **0.84** | **0.82** | **0.58** | **0.44** |
| MADLAD400-7B | 13.64 | 85.01 | 50.96 | 0.82 | 0.80 | 0.78 | 2.09 | 3.58 |
| SalamandraTA-7b-base | 17.01 | 75.92 | 55.22 | 0.85 | 0.77 | 0.80 | 1.04 | 1.17 |

<img src="./images/bleu_eu.png" alt="Basque" width="100%"/>

</details>

### Low-Resource Languages of Spain

The tables below summarize the performance metrics for translation from English, Spanish, and Catalan into Asturian, Aranese, and Aragonese, compared against [Transducens/IbRo-nllb](https://huggingface.co/Transducens/IbRo-nllb) [(Galiano Jimenez, et al.)](https://aclanthology.org/2024.wmt-1.85/), [NLLB-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) ([Costa-jussà et al., 2022](https://arxiv.org/abs/2207.04672)), and [SalamandraTA-2B](https://huggingface.co/BSC-LT/salamandraTA-2B).

<details>
<summary>English evaluation</summary>

#### English-XX

| | source | target | Bleu ↑ | Ter ↓ | ChrF ↑ |
|:---|:---|:---|:---|:---|:---|
| SalamandraTA-7b-instruct | en | ast | **31.79** | **54.07** | **61.78** |
| SalamandraTA-7b-base | en | ast | 26.40 | 64.02 | 57.35 |
| Transducens/IbRo-nllb | en | ast | 20.56 | 63.92 | 53.32 |
| | | | | | |
| SalamandraTA-7b-instruct | en | arn | **22.77** | **66.06** | **52.61** |
| SalamandraTA-7b-base | en | arn | 14.13 | 74.05 | 46.17 |
| Transducens/IbRo-nllb | en | arn | 12.81 | 73.21 | 45.76 |
| | | | | | |
| SalamandraTA-7b-instruct | en | arg | **19.74** | 71.58 | **51.08** |
| Transducens/IbRo-nllb | en | arg | 14.07 | **70.37** | 46.89 |
| SalamandraTA-7b-base | en | arg | 12.24 | 73.48 | 44.75 |

</details>

<details>
<summary>Spanish evaluation</summary>

#### Spanish-XX

| | source | target | Bleu ↑ | Ter ↓ | ChrF ↑ |
|:---|:---|:---|:---|:---|:---|
| SalamandraTA-7b-instruct | es | ast | **20.66** | **71.81** | **53.14** |
| SalamandraTA-7b-base | es | ast | 17.65 | 75.78 | 51.05 |
| Transducens/IbRo-nllb | es | ast | 16.79 | 76.36 | 50.89 |
| | | | | | |
| SalamandraTA-7b-base | es | arn | **51.59** | **35.51** | **73.50** |
| Transducens/IbRo-nllb | es | arn | 50.20 | 36.60 | 73.16 |
| SalamandraTA-7b-instruct | es | arn | 47.37 | 39.29 | 70.65 |
| | | | | | |
| Transducens/IbRo-nllb | es | arg | **59.75** | **28.01** | **78.73** |
| SalamandraTA-7b-base | es | arg | 53.96 | 31.51 | 76.08 |
| SalamandraTA-7b-instruct | es | arg | 44.10 | 39.98 | 71.12 |

</details>

<details>
<summary>Catalan evaluation</summary>

#### Catalan-XX

| | source | target | Bleu ↑ | Ter ↓ | ChrF ↑ |
|:---|:---|:---|:---|:---|:---|
| SalamandraTA-7b-instruct | ca | ast | **28.13** | **58.84** | **58.98** |
| SalamandraTA-7b-base | ca | ast | 26.11 | 63.63 | 58.08 |
| Transducens/IbRo-nllb | ca | ast | 24.77 | 61.60 | 57.49 |
| | | | | | |
| SalamandraTA-7b-base | ca | arn | **31.76** | **53.71** | **60.71** |
| Transducens/IbRo-nllb | ca | arn | 31.22 | 54.30 | 60.30 |
| SalamandraTA-7b-instruct | ca | arn | 30.89 | 54.70 | 59.78 |
| | | | | | |
| Transducens/IbRo-nllb | ca | arg | **24.44** | **60.79** | **55.51** |
| SalamandraTA-7b-base | ca | arg | 22.53 | 62.37 | 54.32 |
| SalamandraTA-7b-instruct | ca | arg | 20.96 | 65.64 | 52.41 |

</details>

### Gender Aware Translation

Below are the evaluation results for gender-aware translation on the [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval?tab=readme-ov-file#mt-geneval) dataset ([Currey, A. et al.](https://github.com/amazon-science/machine-translation-gender-eval?tab=readme-ov-file#mt-geneval)). Results are reported for translation from English into German, Spanish, French, Italian, Portuguese, and Russian, compared against [MADLAD400-7B-mt](https://huggingface.co/google/madlad400-7b-mt), [TowerInstruct-7B-v0.2](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.2), and the SalamandraTA-7b-base model. Evaluation was conducted using [MT-Lens](https://github.com/langtech-bsc/mt-evaluation) and is reported as accuracy, computed with the accuracy metric provided with MT-GenEval.

<details>
<summary>Show results</summary>

| | Source | Target | Masc | Fem | Pair |
|:---|:---|:---|---:|---:|---:|
| SalamandraTA-7b-instruct | en | de | **0.883** | **0.883** | **0.773** |
| SalamandraTA-7b-base | en | de | 0.857 | 0.77 | 0.66 |
| MADLAD400-7B-mt | en | de | 0.877 | 0.823 | 0.713 |
| TowerInstruct-7B-v0.2 | en | de | 0.863 | 0.84 | 0.727 |
| | | | | | |
| SalamandraTA-7b-instruct | en | es | 0.867 | **0.85** | **0.737** |
| SalamandraTA-7b-base | en | es | **0.89** | 0.733 | 0.643 |
| MADLAD400-7B-mt | en | es | 0.887 | 0.78 | 0.687 |
| TowerInstruct-7B-v0.2 | en | es | 0.85 | 0.823 | 0.693 |
| | | | | | |
| SalamandraTA-7b-instruct | en | fr | **0.9** | 0.82 | **0.737** |
| SalamandraTA-7b-base | en | fr | 0.8867 | 0.71 | 0.617 |
| MADLAD400-7B-mt | en | fr | 0.873 | 0.777 | 0.663 |
| TowerInstruct-7B-v0.2 | en | fr | 0.88 | **0.823** | 0.717 |
| | | | | | |
| SalamandraTA-7b-instruct | en | it | 0.9 | **0.763** | 0.683 |
| SalamandraTA-7b-base | en | it | 0.893 | 0.593 | 0.513 |
| MADLAD400-7B-mt | en | it | 0.907 | 0.663 | 0.597 |
| TowerInstruct-7B-v0.2 | en | it | **0.947** | 0.747 | **0.713** |
| | | | | | |
| SalamandraTA-7b-instruct | en | pt | 0.92 | **0.77** | **0.707** |
| SalamandraTA-7b-base | en | pt | **0.923** | 0.65 | 0.597 |
| MADLAD400-7B-mt | en | pt | **0.923** | 0.687 | 0.627 |
| TowerInstruct-7B-v0.2 | en | pt | 0.907 | 0.73 | 0.67 |
| | | | | | |
| SalamandraTA-7b-instruct | en | ru | **0.95** | **0.837** | **0.793** |
| SalamandraTA-7b-base | en | ru | 0.933 | 0.713 | 0.653 |
| MADLAD400-7B-mt | en | ru | 0.94 | 0.797 | 0.74 |
| TowerInstruct-7B-v0.2 | en | ru | 0.933 | 0.797 | 0.733 |

<img src="./images/geneval.png"/>

</details>

## Ethical Considerations and Limitations

Detailed information on the work done to examine the presence of unwanted social and cognitive biases in the base model can be found at the [Salamandra-7B model card](https://huggingface.co/BSC-LT/salamandra-7b). With regard to MT models, the only bias-related analysis we have conducted is the MT-GenEval evaluation. No specific analysis has yet been carried out to evaluate potential biases or limitations in translation accuracy across different languages, dialects, or domains.

However, we recognize the importance of identifying and addressing any harmful stereotypes, cultural inaccuracies, or systematic performance discrepancies that may arise in Machine Translation. As such, we plan to continue performing more analyses as we implement the necessary metrics and methods within our evaluation framework [MT-Lens](https://github.com/langtech-bsc/mt-evaluation).

Note that the model has only undergone preliminary instruction tuning. We urge developers to consider potential limitations and conduct safety testing and tuning tailored to their specific applications.

## Additional information

### Author

The Language Technologies Unit from Barcelona Supercomputing Center.

### Contact

For further information, please send an email to <[email protected]>.

### Copyright

Copyright (c) 2025 by Language Technologies Unit, Barcelona Supercomputing Center.

### Funding

This work has been promoted and financed by the Government of Catalonia through the [Aina Project](https://projecteaina.cat/).

This work is also funded by the _Ministerio para la Transformación Digital y de la Función Pública_ (Funded by EU – NextGenerationEU) within the framework of the [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337.

### Acknowledgements

The success of this project has been made possible thanks to the invaluable contributions of our partners in the [ILENIA Project](https://proyectoilenia.es/): [HiTZ](http://hitz.ehu.eus/es) and [CiTIUS](https://citius.gal/es/). Their efforts have been instrumental in advancing our work, and we sincerely appreciate their help and support.

### Disclaimer

Be aware that the model may contain biases or other unintended distortions. When third parties deploy systems or provide services based on this model, or use the model themselves, they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence. The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.

### License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Citation

If you find our model useful, we would appreciate it if you could cite our work as follows:

```
@misc{salamandrata-7b-instruct,
      title={SalamandraTA: A European Multilingual Large Language Model for Translation-Related Tasks},
      author={Javier García Gilabert and Carlos Escolano and Audrey Mash and Xixian Liao and Francesca De Luca Fornaciari and Miguel Claramunt Argote and Ella Bohman and Maite Melero},
      organization={Barcelona Supercomputing Center},
      year={2025},
      url={https://huggingface.co/BSC-LT/salamandraTA-7b-instruct}
}
```
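The surface metrics reported in the Evaluation section (BLEU, TER, ChrF) are the sacreBLEU implementations, so comparable scores can be recomputed locally. A minimal sketch with toy strings (illustrative only, not actual system outputs):

```python
from sacrebleu.metrics import BLEU, CHRF, TER

# Toy hypothesis/reference pair; the reported scores were computed on the
# Flores+200 devtest set, not on single sentences like this one.
hyps = ["El meu cap m'ha dit que he de treballar els caps de setmana."]
refs = [["Llavors, el meu cap m'ha dit que he de treballar els caps de setmana."]]

for metric in (BLEU(), CHRF(), TER()):
    print(metric.corpus_score(hyps, refs))
```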
GabrielMM/Math_SFT_v2_9epoch
GabrielMM
2025-05-30T07:41:54Z
2
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:41:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
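A minimal usage sketch inferred solely from this repository's metadata (the `qwen2`, `text-generation`, and `conversational` tags); the prompt content and generation settings below are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard causal-LM loading and a chat template are available,
# as the "qwen2" and "conversational" tags suggest.
model_id = "GabrielMM/Math_SFT_v2_9epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Solve 12 * 7 and explain briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```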
hiwiy/roberta-classification
hiwiy
2025-05-30T07:41:45Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-30T07:18:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
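A minimal usage sketch inferred solely from this repository's metadata (the `roberta` and `text-classification` tags); the label names returned depend on the undocumented training setup:

```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard text-classification
# pipeline, as the pipeline tag suggests.
clf = pipeline("text-classification", model="hiwiy/roberta-classification")
print(clf("This is an example sentence."))
```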
2yunadaaa/qwen2.5-4b-3kingdoms-augmented
2yunadaaa
2025-05-30T07:39:54Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-30T07:39:43Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** 2yunadaaa - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
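A minimal loading sketch inferred from the card above (Unsloth fine-tune of a 4-bit Qwen2.5 base); the sequence length is an illustrative assumption, and if the repo instead holds merged full weights, plain `transformers` loading should also work:

```python
from unsloth import FastLanguageModel

# Assumption: the repo contains Unsloth-compatible (LoRA or merged) weights.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="2yunadaaa/qwen2.5-4b-3kingdoms-augmented",
    max_seq_length=2048,   # illustrative assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```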
vmerinoc/openai-whisper-medium-lora-col
vmerinoc
2025-05-30T07:36:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T07:36:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
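A heavily hedged loading sketch: the repository name suggests a LoRA adapter on top of `openai/whisper-medium`, but this is only an assumption based on the repo id; if the repo holds full weights, load it directly with `WhisperForConditionalGeneration.from_pretrained` instead:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Assumption: the repo is a PEFT/LoRA adapter for openai/whisper-medium.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "vmerinoc/openai-whisper-medium-lora-col")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```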
tonglaovn/llama3_8B_finetuned_sport_tva
tonglaovn
2025-05-30T07:28:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-30T07:26:52Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
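Since the quick-start section above is empty, here is a minimal, hedged sketch of how a 4-bit bitsandbytes checkpoint like this one (tags: `llama`, `text-generation`, `4-bit`, `bitsandbytes`) is typically loaded. The repo id `your-username/your-model` is a placeholder, not the actual model id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual model id from this page.
model_id = "your-username/your-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint is stored in 4-bit (bitsandbytes), so the quantization config
# saved with the model is picked up automatically on load.
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```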
juniechu/finetuned-qa-melayu
juniechu
2025-05-30T07:28:03Z
2
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
question-answering
2025-05-30T07:24:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
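Since the quick-start section above is empty, the snippet below is a minimal sketch for this extractive question-answering checkpoint; the repo id comes from this page, while the Malay question/context pair is an invented example.

```python
from transformers import pipeline

# Minimal sketch: extractive question answering with this checkpoint.
qa = pipeline("question-answering", model="juniechu/finetuned-qa-melayu")

# Invented Malay example -- replace with your own question and context.
result = qa(
    question="Siapakah Perdana Menteri Malaysia yang pertama?",
    context="Tunku Abdul Rahman ialah Perdana Menteri Malaysia yang pertama.",
)
print(result["answer"], result["score"])
```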
TOMFORD79/Tom8
TOMFORD79
2025-05-30T07:27:35Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-30T06:16:10Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
izeng0975/results
izeng0975
2025-05-30T07:26:12Z
5
0
transformers
[ "transformers", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2025-05-30T07:26:00Z
--- library_name: transformers license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0242 - Accuracy: 0.9959 - F1: 0.9959 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.0039 | 16.1290 | 500 | 0.0242 | 0.9959 | 0.9959 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
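As the usage sections above are still marked "More information needed", here is a minimal sketch for running this fine-tuned DistilHuBERT classifier; the audio file path is a placeholder, and the label set is not documented in this card.

```python
from transformers import pipeline

# Minimal sketch: audio classification with the fine-tuned DistilHuBERT model.
classifier = pipeline("audio-classification", model="izeng0975/results")

# "sample.wav" is a placeholder; HuBERT-family models expect 16 kHz mono audio.
predictions = classifier("sample.wav")
print(predictions)
```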
Inect2/loop_silver_experience
Inect2
2025-05-30T07:26:11Z
0
0
null
[ "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "region:us" ]
null
2025-05-07T09:54:16Z
--- base_model: - black-forest-labs/FLUX.1-dev trigger_word: - vxq9_loop dataset: - https://images.inku.tech/datasets/771eec74-d034-405f-85ca-05088823888f ---
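This card only lists the base model, trigger word, and dataset, so the snippet below is a speculative sketch that assumes the repository contains LoRA weights for FLUX.1-dev; if the weights are a full finetune instead, load the repo directly with `FluxPipeline.from_pretrained`.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Assumption: this repo holds LoRA weights; adjust if it is a full finetune.
pipe.load_lora_weights("Inect2/loop_silver_experience")

# "vxq9_loop" is the trigger word declared in the front matter above;
# the rest of the prompt is an invented example.
image = pipe(
    "vxq9_loop, a silver looping ribbon on black", num_inference_steps=28
).images[0]
image.save("loop_silver.png")
```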
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.7-lr1e-7
AmberYifan
2025-05-30T07:24:42Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T07:04:21Z
--- base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-3.1-8B-sft-SPIN-gpt4o-beta0.7-lr1e-7 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-beta0.7-lr1e-7 This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.7-lr1e-7", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/fyo8pggt) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fsa32442/bert-base-japanese-v3-jnli
fsa32442
2025-05-30T07:23:38Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-30T07:23:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
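Given the empty quick-start section and the repo name (which suggests a JNLI, i.e. Japanese natural language inference, fine-tune of bert-base-japanese-v3), here is a minimal sketch; the sentence pair is invented, and the label names are not documented in this card.

```python
from transformers import pipeline

# Minimal sketch: JNLI-style sentence-pair classification (an assumption based on
# the repo name). Japanese BERT tokenizers typically require `fugashi` and
# `unidic-lite` to be installed.
classifier = pipeline(
    "text-classification", model="fsa32442/bert-base-japanese-v3-jnli"
)

# Invented premise/hypothesis pair.
result = classifier({"text": "猫がソファで寝ている。", "text_pair": "動物が休んでいる。"})
print(result)
```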
pot99rta/DarkThink-DirectiveReasoner-12B-GGUF
pot99rta
2025-05-30T07:23:12Z
1
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:pot99rta/DarkThink-DirectiveReasoner-12B", "base_model:quantized:pot99rta/DarkThink-DirectiveReasoner-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-30T01:48:05Z
--- base_model: pot99rta/DarkThink-DirectiveReasoner-12B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # DarkThink-DirectiveReasoner-12B-GGUF ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/PuQfLiDExyqXb1diCWR8h.png) More robust, with all the Darkness added. ```Models Merged:``` ```1. ReadyArt/Omega-Darker_The-Final-Directive-12B``` ```2. pot99rta/MagcarpMell-ThinkandReasoner-12B``` ```Preset:``` ```Use ChatML or Mistral``` ChatML works better for reasoning because Magcap and MagMell use ChatML for their base models. Just realized I've been spelling Magcap as 'Magcarp' this WHOLE time... This model was converted to GGUF format from [`pot99rta/DarkThink-DirectiveReasoner-12B`](https://huggingface.co/pot99rta/DarkThink-DirectiveReasoner-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/pot99rta/DarkThink-DirectiveReasoner-12B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pot99rta/DarkThink-DirectiveReasoner-12B-Q8_0-GGUF --hf-file darkthink-directivereasoner-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pot99rta/DarkThink-DirectiveReasoner-12B-Q8_0-GGUF --hf-file darkthink-directivereasoner-12b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pot99rta/DarkThink-DirectiveReasoner-12B-Q8_0-GGUF --hf-file darkthink-directivereasoner-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pot99rta/DarkThink-DirectiveReasoner-12B-Q8_0-GGUF --hf-file darkthink-directivereasoner-12b-q8_0.gguf -c 2048 ```
geninhu/RakutenAI-7B-instruct-GPTQ
geninhu
2025-05-30T07:22:24Z
9
0
null
[ "mistral", "gptq", "4bit", "vllm", "quantized", "en", "ja", "base_model:Rakuten/RakutenAI-7B-instruct", "base_model:quantized:Rakuten/RakutenAI-7B-instruct", "license:apache-2.0", "4-bit", "region:us" ]
null
2025-05-30T07:18:55Z
--- base_model: Rakuten/RakutenAI-7B-instruct inference: false language: - en - ja license: apache-2.0 model_creator: Rakuten model_type: llama quantized_by: auto-gptq tags: - gptq - 4bit - vllm - quantized --- # RakutenAI-7B-instruct GPTQ This is a 4-bit GPTQ quantized version of [Rakuten/RakutenAI-7B-instruct](https://huggingface.co/Rakuten/RakutenAI-7B-instruct). ## Quantization Details - Method: GPTQ - Bits: 4 - Group size: 128 - Symmetric: True ## Usage with vLLM ```python from vllm import LLM llm = LLM(model="geninhu/RakutenAI-7B-instruct-GPTQ") ``` ## Usage with Transformers ```python from auto_gptq import AutoGPTQForCausalLM from transformers import AutoTokenizer model = AutoGPTQForCausalLM.from_quantized("geninhu/RakutenAI-7B-instruct-GPTQ") tokenizer = AutoTokenizer.from_pretrained("geninhu/RakutenAI-7B-instruct-GPTQ") ```
FormlessAI/263fe867-90ca-40be-8a7d-14ea000e4906
FormlessAI
2025-05-30T07:21:23Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "base_model:jingyeom/seal3.1.6n_7b", "base_model:finetune:jingyeom/seal3.1.6n_7b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T03:04:50Z
--- base_model: jingyeom/seal3.1.6n_7b library_name: transformers model_name: 263fe867-90ca-40be-8a7d-14ea000e4906 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 263fe867-90ca-40be-8a7d-14ea000e4906 This model is a fine-tuned version of [jingyeom/seal3.1.6n_7b](https://huggingface.co/jingyeom/seal3.1.6n_7b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/263fe867-90ca-40be-8a7d-14ea000e4906", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/723j4s5d) This model was trained with SFT. ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
davidelobba/TEMU-VTOFF
davidelobba
2025-05-30T07:19:42Z
0
0
diffusers
[ "diffusers", "safetensors", "image-generation", "image-to-image", "virtual-try-on", "virtual-try-off", "diffusion", "dit", "stable-diffusion-3", "multimodal", "fashion", "pytorch", "en", "dataset:dresscode", "dataset:viton-hd", "arxiv:2505.21062", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:finetune:stabilityai/stable-diffusion-3-medium-diffusers", "license:cc-by-nc-4.0", "region:us" ]
image-to-image
2025-05-28T21:06:14Z
--- license: cc-by-nc-4.0 base_model: - stabilityai/stable-diffusion-3-medium-diffusers pipeline_tag: image-to-image tags: - image-generation - image-to-image - virtual-try-on - virtual-try-off - diffusion - dit - stable-diffusion-3 - multimodal - fashion - pytorch language: en datasets: - dresscode - viton-hd --- <div align="center"> <h1 align="center">TEMU-VTOFF</h1> <h3 align="center">Text-Enhanced MUlti-category Virtual Try-Off</h3> </div> <div align="center"> <picture> <source srcset="/davidelobba/TEMU-VTOFF/resolve/main/teaser.png" media="(prefers-color-scheme: dark)"> <img src="/davidelobba/TEMU-VTOFF/resolve/main/teaser.png" width="75%" alt="TEMU-VTOFF Teaser"> </source> </picture> </div> <div align="center"> **Inverse Virtual Try-On: Generating Multi-Category Product-Style Images from Clothed Individuals** [Davide Lobba](https://scholar.google.com/citations?user=WEMoLPEAAAAJ&hl=en&oi=ao)<sup>1,2,\*</sup>, [Fulvio Sanguigni](https://scholar.google.com/citations?user=tSpzMUEAAAAJ&hl=en)<sup>2,3,\*</sup>, [Bin Ren](https://scholar.google.com/citations?user=Md9maLYAAAAJ&hl=en)<sup>1,2</sup>, [Marcella Cornia](https://scholar.google.com/citations?user=DzgmSJEAAAAJ&hl=en)<sup>3</sup>, [Rita Cucchiara](https://scholar.google.com/citations?user=OM3sZEoAAAAJ&hl=en)<sup>3</sup>, [Nicu Sebe](https://scholar.google.com/citations?user=stFCYOAAAAAJ&hl=en)<sup>1</sup> <sup>1</sup>University of Trento, <sup>2</sup>University of Pisa, <sup>3</sup>University of Modena and Reggio Emilia <sup>*</sup> Equal contribution </div> <div align="center"> <a href="https://arxiv.org/abs/2505.21062" style="margin: 0 2px;"> <img src="https://img.shields.io/badge/Paper-Arxiv_2505.21062-darkred.svg" alt="Paper"> </a> <a href="https://temu-vtoff-page.github.io/" style="margin: 0 2px;"> <img src='https://img.shields.io/badge/Webpage-Project-silver?style=flat&logo=&logoColor=orange' alt='Project Webpage'> </a> <a href="https://github.com/davidelobba/TEMU-VTOFF" style="margin: 0 2px;"> <img src="https://img.shields.io/badge/GitHub-Repo-blue.svg?logo=github" alt="GitHub Repository"> </a> <!-- The Hugging Face model badge will be automatically displayed on the model page --> </div> ## 💡 Model Description **TEMU-VTOFF** is a novel dual-DiT (Diffusion Transformer) architecture designed for the Virtual Try-Off task: generating in-shop images of garments worn by a person. By combining a pretrained feature extractor with a text-enhanced generation module, our method can handle occlusions, multiple garment categories, and ambiguous appearances. It further refines generation fidelity via a feature alignment module based on DINOv2. This model is based on `stabilityai/stable-diffusion-3-medium-diffusers`. The uploaded weights correspond to the finetuned feature extractor and the VTOFF DiT module. ## ✨ Key Features Our contribution can be summarized as follows: - **🎯 Multi-Category Try-Off**. We present a unified framework capable of handling multiple garment types (upper-body, lower-body, and full-body clothes) without requiring category-specific pipelines. - **🔗 Multimodal Hybrid Attention**. We introduce a novel attention mechanism that integrates garment textual descriptions into the generative process by linking them with person-specific features. This helps the model synthesize occluded or ambiguous garment regions more accurately. - **⚡ Garment Aligner Module**. We design a lightweight aligner that conditions generation on clean garment images, replacing conventional denoising objectives. 
This yields more consistent alignment across the dataset and preserves visual details more faithfully. - **📊 Extensive experiments**. Experiments on the Dress Code and VITON-HD datasets demonstrate that TEMU-VTOFF outperforms prior methods in both the quality of the generated images and their alignment with the target garment, highlighting its strong generalization capabilities.
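The full try-off pipeline lives in the GitHub repository linked above; as a starting point, the weights published here (the finetuned feature extractor and the VTOFF DiT module) can be fetched with `huggingface_hub`. A minimal sketch:

```python
from huggingface_hub import snapshot_download

# Downloads the released checkpoints; inference itself is driven by the scripts
# in https://github.com/davidelobba/TEMU-VTOFF (see the card above).
local_dir = snapshot_download("davidelobba/TEMU-VTOFF")
print("Weights downloaded to:", local_dir)
```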
Nourix44/Nourix33332
Nourix44
2025-05-30T07:19:09Z
0
0
null
[ "region:us" ]
null
2025-05-30T07:18:57Z
Nourix is a premium plant-based supplement designed to support natural weight management and overall well-being. Created for those seeking a balanced approach to health, it combines scientifically backed ingredients that boost metabolism, curb appetite, increase energy, and support detoxification. ## **[Click here to order from the official Nourix website](https://capsules24x7.com/nourix-france)** ## Nourix: More Than a Pill At its core, Nourix is a plant-based supplement designed to support weight management by boosting metabolism, suppressing appetite, and improving energy levels. What sets it apart, though, is its branding as a holistic lifestyle choice rather than just a quick fix. The brand is marketed through sleek websites that speak to the modern consumer's desire for authenticity, sustainability, and self-care. Its vegan, gluten-free, non-GMO formula resonates with a generation that prioritizes healthy living. Nourix positions itself as a partner in a broader health journey, encouraging users to embrace mindful eating, joyful exercise, and mental well-being. This philosophy has made it a cultural touchstone, particularly in France, where wellness trends often collide with culinary traditions and aesthetic sensibilities. ## Ingredients: A Blend of Nature and Innovation The Nourix formula is a love letter to nature, pairing ancient botanicals with modern nutritional science. Each ingredient is chosen not only for its effectiveness but also for its cultural resonance, evoking a sense of heritage and trust. Here is a closer look at its key components: Green tea extract: a tribute to ancient Asian health practices. The catechins in green tea trigger thermogenesis and help burn calories. Its antioxidant properties align with the French preference for longevity and radiance. Berberine: Extracted from barberry, berberine is part of a global shift toward metabolic health and appeals to those wary of sugar-driven weight gain. Ginger: A staple of French cooking and herbal medicine. Ginger's warming effect boosts metabolism and aids digestion, offering users familiar flavors. Cinnamon: Cinnamon brings a warm, comforting note, curbs cravings, and stabilizes glucose levels, making it a bridge between pleasure and discipline. Apple cider vinegar: A favorite of wellness influencers, this appetite-curbing ingredient taps into the "functional foods" trend on social media. Cayenne pepper: Cayenne's thermogenic properties add a spicy kick suited to a bold, adventurous lifestyle, appealing to those who seek intensity. Milk thistle: Rooted in European herbal medicine, milk thistle supports liver health and rides the detox wave that dominates health culture. These ingredients come as two daily capsules, taken with water alongside a meal.
The product's moderate caffeine content (30 mg per serving) delivers a gentle energy boost and avoids the overstimulation typical of competing products. ## The Cultural Impact of Nourix Nourix has transcended its role as a dietary supplement to become a cultural phenomenon, particularly in France, where it fits squarely within wellness and lifestyle trends. Here is how it has made waves: Social media and influencer culture: on platforms like Instagram and X, Nourix is a favorite hashtag, with users sharing aesthetic shots of their capsules alongside smoothie bowls and yoga mats. Influencers, from Parisian fitness gurus to holistic coaches in Provence, feature Nourix in their "chic wellness routines," reinforcing its appeal. Community building: the brand fosters a sense of belonging through online forums and social media groups where users share recipes, workout tips, and success stories. This community-driven approach echoes the French tradition of shared meals, updated for the digital age. ## **[Click here to order from the official Nourix website](https://capsules24x7.com/nourix-france)** Body positivity and realism: unlike aggressive weight-loss brands, Nourix tells a balanced story that puts health before perfection. Its marketing highlights diverse body types and stories that resonate with the global shift toward inclusive wellness. Pop culture buzz: rumors of Nourix appearing on French TV shows such as M6's 66 Minutes, though unconfirmed, have fueled its mystique and positioned it as an "in the know" name among tastemakers. This cultural resonance has made Nourix a lifestyle brand on par with carrying a reusable bag or sipping oat milk. It is not just about losing weight; it is about a mindful, vibrant way of living. ## Looking Ahead: The Future of Nourix As Nourix grows, its potential lies in deepening its cultural roots and addressing trust concerns. Possible next steps include: Greater transparency: publishing clear purchasing information, lab certificates, or a physical address could quiet the skeptics. Growing the community: hosting fitness events or partnering with French gyms could take the digital community offline. Innovation: introducing new formats, such as powders or gummies, could attract younger users. Global push: expanding beyond France through localized marketing could open markets such as the United States or Asia. ## Final Thoughts Nourix is more than a weight-management supplement: it is a cultural movement blending science, style, and community. Its natural formula, built on ingredients such as green tea and berberine, offers a practical tool for those pursuing a healthier lifestyle. Its cultural influence, from Instagram aesthetics to user-driven forums, makes it a beacon of modern wellness. ## **[Click here to order from the official Nourix website](https://capsules24x7.com/nourix-france)**
tomerRest/line_item_embeddings
tomerRest
2025-05-30T07:18:53Z
31
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:54000", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-30T07:17:26Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:54000 - loss:CosineSimilarityLoss base_model: sentence-transformers/all-MiniLM-L6-v2 widget: - source_sentence: N/GOUR WHT SOURDOUGH SLICED 750G sentences: - FISH TEMPURA 115GM X 30PCS - BRIOCHE SQ SWICH LARGE - CASSAVA CRACKER 250GM Maxi - source_sentence: MUFFINS RASPBERRY & WHITE CHOCOLATE sentences: - Soft Dinner Roll 35 50pcs - Lemon Meringue Donut - 400 GRADI MALLOREDDUS PTN TRAY (15) - source_sentence: Blue Swimmer Crab 140g+, 1kg pack, 6kg carton (Imported) sentences: - CANOLA OIL SPRAY PINNACLE 450G - Cayenne Red 1KG - THE BOTANIST GIN (1X700ML) - source_sentence: Bistro oyster Tasmanian sentences: - Broken Prawn Meat - CHEESE RICOTTA 1KG RED BASKET VAC - '[DESSERTS) EQ Ice Cream Bacio (4kg/tub)' - source_sentence: Apple Crumble Muffin sentences: - DICED BEEF - GAROFALO PAPPARDELLE NO.1-35 [500GR/PKT] [12/CTN] - Vegan Lemon Blueberry Friand -6pk pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomerRest/line_item_embeddings") # Run inference sentences = [ 'Apple Crumble Muffin', 'Vegan Lemon Blueberry Friand -6pk', 'GAROFALO PAPPARDELLE NO.1-35 [500GR/PKT] [12/CTN]', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 54,000 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 4 tokens</li><li>mean: 11.64 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 12.0 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:------------------------------------------------|:-------------------------------------|:-----------------| | <code>HTARB TARRAGON BUNCH</code> | <code>Chives - Garlic</code> | <code>1.0</code> | | <code>CHICKEN THIGH BURGER CUT 140G</code> | <code>Herb (N-Z)-Parilla</code> | <code>0.0</code> | | <code>12.5kg Self Raising Flour-SUNFIELD</code> | <code>ISM SALT SACHETS 2000'S</code> | <code>0.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 4 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: linear - 
`lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.2962 | 500 | 0.1769 | | 0.5924 | 1000 | 0.1269 | | 0.8886 | 1500 | 0.1018 | | 1.1848 | 2000 | 0.0838 | | 1.4810 | 2500 | 0.0725 | | 1.7773 | 3000 | 0.0623 | | 2.0735 | 3500 | 0.056 | | 2.3697 | 4000 | 0.0478 | | 2.6659 | 4500 | 0.0485 | | 2.9621 | 5000 | 0.0457 | | 3.2583 | 5500 | 0.0412 | | 3.5545 | 6000 | 0.0406 | | 3.8507 | 6500 | 0.039 | ### Framework Versions - Python: 3.12.7 - Sentence Transformers: 3.3.1 - Transformers: 4.49.0 - PyTorch: 2.6.0 - Accelerate: 1.4.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex 
@inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
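For readers who want to reproduce a setup like the one described above, here is a minimal, hedged sketch of fine-tuning with `CosineSimilarityLoss` (which optimizes an MSE objective over cosine scores, matching the loss configuration in this card); the two pairs are samples from the training table shown above, and the actual training script may differ.

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Two labeled pairs taken from the samples above; the full set has 54,000.
train_examples = [
    InputExample(texts=["HTARB TARRAGON BUNCH", "Chives - Garlic"], label=1.0),
    InputExample(texts=["CHICKEN THIGH BURGER CUT 140G", "Herb (N-Z)-Parilla"], label=0.0),
]

# Batch size 32 and 4 epochs mirror the hyperparameters listed above.
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)  # MSE over cosine similarities

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=4)
model.save("line_item_embeddings")
```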
OpenVINO/Qwen3-4B-int4-ov
OpenVINO
2025-05-30T07:18:48Z
1,339
0
null
[ "openvino", "qwen3", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:apache-2.0", "region:us" ]
null
2025-04-30T13:05:41Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE base_model: - Qwen/Qwen3-4B base_model_relation: quantized --- # Qwen3-4B-int4-ov * Model creator: [Qwen](https://huggingface.co/Qwen) * Original model: [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) ## Description This is the [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf). ## Quantization Parameters Weight compression was performed using `nncf.compress_weights` with the following parameters: * mode: **INT4_SYM** * ratio: **1.0** * group_size: **128** * awq: **True** * scale_estimation: **True** * dataset: [wikitext2](https://huggingface.co/datasets/mindchain/wikitext2) For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html). ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2025.1.0 and higher * Optimum Intel 1.24.0 and higher ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. Run model inference: ``` from transformers import AutoTokenizer from optimum.intel.openvino import OVModelForCausalLM model_id = "OpenVINO/qwen3-4b-int4-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("What is OpenVINO?", return_tensors="pt") outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide. ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) 1. Install packages required for using OpenVINO GenAI: ``` pip install openvino-genai huggingface_hub ``` 2. Download the model from the Hugging Face Hub: ``` import huggingface_hub as hf_hub model_id = "OpenVINO/qwen3-4b-int4-ov" model_path = "qwen3-4b-int4-ov" hf_hub.snapshot_download(model_id, local_dir=model_path) ``` 3. Run model inference: ``` import openvino_genai as ov_genai device = "CPU" pipe = ov_genai.LLMPipeline(model_path, device) pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template) print(pipe.generate("What is OpenVINO?", max_length=200)) ``` More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples). You can find more detailed usage examples in the OpenVINO Notebooks: - [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM) - [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation) ## Limitations Check the original [model card](https://huggingface.co/Qwen/Qwen3-4B) for limitations.
## Legal information The original model is distributed under [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE) license. More details can be found in [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B). ## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
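For reference, the weight-compression settings listed above can be approximated through Optimum Intel's quantization config when exporting the original model yourself. This is a hedged sketch, not the exact script used to produce this repository; the parameter names follow `OVWeightQuantizationConfig` in recent Optimum Intel releases and may differ across versions.

```python
from optimum.intel.openvino import OVModelForCausalLM, OVWeightQuantizationConfig

# Mirrors the card's parameters: INT4_SYM, ratio 1.0, group_size 128,
# AWQ and scale estimation enabled, wikitext2 as the calibration dataset.
q_config = OVWeightQuantizationConfig(
    bits=4,
    sym=True,
    ratio=1.0,
    group_size=128,
    awq=True,
    scale_estimation=True,
    dataset="wikitext2",
)

model = OVModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B", export=True, quantization_config=q_config
)
model.save_pretrained("qwen3-4b-int4-ov")
```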
pot99rta/BMO-CaptianMaid-12B-GGUF
pot99rta
2025-05-30T07:16:15Z
5
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:pot99rta/BMO-CaptianMaid-12B", "base_model:quantized:pot99rta/BMO-CaptianMaid-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T23:38:19Z
--- base_model: pot99rta/BMO-CaptianMaid-12B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # BMO-CaptianMaid-12B-GGUF ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/HtM8KBr6PZHVg5iiAJSkN.png) ```Models Merged:``` ```1. Nitral-AI/Captain_BMO-12B``` ```2. pot99rta/CaptainMaid-12B-VioletMell-V0.420``` ```Preset:``` ```Use ChatML or Mistral - Phi works too for some unknown reason.``` Phi and Mistral work, with interesting results... I quite like it with my settings. This model was converted to GGUF format from [`pot99rta/BMO-CaptianMaid-12B`](https://huggingface.co/pot99rta/BMO-CaptianMaid-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/pot99rta/BMO-CaptianMaid-12B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pot99rta/BMO-CaptianMaid-12B-Q8_0-GGUF --hf-file bmo-captianmaid-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pot99rta/BMO-CaptianMaid-12B-Q8_0-GGUF --hf-file bmo-captianmaid-12b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pot99rta/BMO-CaptianMaid-12B-Q8_0-GGUF --hf-file bmo-captianmaid-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pot99rta/BMO-CaptianMaid-12B-Q8_0-GGUF --hf-file bmo-captianmaid-12b-q8_0.gguf -c 2048 ```
pot99rta/BMO-CaptianMaid-12B
pot99rta
2025-05-30T07:15:55Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:Nitral-AI/Captain_BMO-12B", "base_model:merge:Nitral-AI/Captain_BMO-12B", "base_model:pot99rta/CaptainMaid-12B-VioletMell-V0.420", "base_model:merge:pot99rta/CaptainMaid-12B-VioletMell-V0.420", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T20:31:19Z
--- base_model: - Nitral-AI/Captain_BMO-12B - pot99rta/CaptainMaid-12B-VioletMell-V0.420 library_name: transformers tags: - mergekit - merge --- # BMO-CaptianMaid-12B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/bRUq0aF5mcJXmTVgeqeI8.png) ```Models Merged:``` ```1. Nitral-AI/Captain_BMO-12B``` ```2. pot99rta/CaptainMaid-12B-VioletMell-V0.420``` ```Preset:``` ```Use ChatML or Mistral - Phi works too for some unknown reason.``` Phi and Mistral work, with interesting results... I quite like it with my settings. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [pot99rta/CaptainMaid-12B-VioletMell-V0.420](https://huggingface.co/pot99rta/CaptainMaid-12B-VioletMell-V0.420) as the base. ### Models Merged The following models were included in the merge: * [Nitral-AI/Captain_BMO-12B](https://huggingface.co/Nitral-AI/Captain_BMO-12B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: pot99rta/CaptainMaid-12B-VioletMell-V0.420 # no parameters necessary for the base model - model: pot99rta/CaptainMaid-12B-VioletMell-V0.420 parameters: density: 0.5 weight: 0.5 - model: Nitral-AI/Captain_BMO-12B parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: pot99rta/CaptainMaid-12B-VioletMell-V0.420 parameters: normalize: false int8_mask: true dtype: float16 ```
pot99rta/PatriMaidV2-12B-GGUF
pot99rta
2025-05-30T07:13:11Z
71
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:pot99rta/PatriMaidV2-12B", "base_model:quantized:pot99rta/PatriMaidV2-12B", "endpoints_compatible", "region:us" ]
null
2025-05-29T21:05:11Z
--- base_model: pot99rta/PatriMaidV2-12B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # PatriMaidV2-12B-GGUF ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/ci2GfSRRX7dwgSN7kGCty.png) Neon Glow ```Models Merged:``` ```1. PocketDoc/Dans-PersonalityEngine-V1.3.0-12b``` ```2. pot99rta/PatriMaid-12B-Forgottenslop-NeonMell``` ```Preset:``` ```Use ChatML or Mistral - You can use Phi too!``` Because Dans-PersonalityEngine uses Phi as its preset template, the best mix is Phi and Mistral, for some weird reason... This model was converted to GGUF format from [`pot99rta/PatriMaidV2-12B`](https://huggingface.co/pot99rta/PatriMaidV2-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/pot99rta/PatriMaidV2-12B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pot99rta/PatriMaidV2-12B-Q8_0-GGUF --hf-file patrimaidv2-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pot99rta/PatriMaidV2-12B-Q8_0-GGUF --hf-file patrimaidv2-12b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pot99rta/PatriMaidV2-12B-Q8_0-GGUF --hf-file patrimaidv2-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pot99rta/PatriMaidV2-12B-Q8_0-GGUF --hf-file patrimaidv2-12b-q8_0.gguf -c 2048 ```
E-katrin/encoder_freezed_50epochs_10e-5
E-katrin
2025-05-30T07:12:27Z
4
0
transformers
[ "transformers", "safetensors", "cobald_parser", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-05-30T07:11:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
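The template above leaves usage blank; as a rough, unverified sketch, the repository's tags suggest a custom `cobald_parser` model loaded with `trust_remote_code=True` (the output structure is defined by that custom class, so treat this as an assumption):

```python
from transformers import AutoTokenizer, AutoModel

# Hypothetical usage sketch: the repo declares custom_code, so the actual
# model class ships with the repository and is fetched at load time.
model_id = "E-katrin/encoder_freezed_50epochs_10e-5"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("An example sentence to parse.", return_tensors="pt")
outputs = model(**inputs)  # structure depends on the custom cobald_parser class
```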
bombshelll/2D_hgg_lgg_classification
bombshelll
2025-05-30T07:09:58Z
0
0
transformers
[ "transformers", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-30T06:42:59Z
--- library_name: transformers license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: 2D_hgg_lgg_classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8203125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2D_hgg_lgg_classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7560 - Accuracy: 0.8203 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.7243 | 0.9655 | 7 | 0.6272 | 0.7656 | | 0.5807 | 1.9310 | 14 | 0.5266 | 0.7812 | | 0.556 | 2.8966 | 21 | 0.5086 | 0.7812 | | 0.4675 | 4.0 | 29 | 0.4844 | 0.7812 | | 0.4992 | 4.9655 | 36 | 0.4664 | 0.7812 | | 0.4562 | 5.9310 | 43 | 0.4430 | 0.7344 | | 0.4344 | 6.8966 | 50 | 0.4726 | 0.7109 | | 0.3778 | 8.0 | 58 | 0.4302 | 0.7656 | | 0.3922 | 8.9655 | 65 | 0.4350 | 0.8125 | | 0.3864 | 9.9310 | 72 | 0.4259 | 0.7656 | | 0.3388 | 10.8966 | 79 | 0.4462 | 0.7656 | | 0.3071 | 12.0 | 87 | 0.5272 | 0.7969 | | 0.3233 | 12.9655 | 94 | 0.4723 | 0.7188 | | 0.3103 | 13.9310 | 101 | 0.4494 | 0.7656 | | 0.2818 | 14.8966 | 108 | 0.4279 | 0.8047 | | 0.2341 | 16.0 | 116 | 0.4069 | 0.7891 | | 0.2103 | 16.9655 | 123 | 0.4237 | 0.7969 | | 0.219 | 17.9310 | 130 | 0.4467 | 0.8047 | | 0.21 | 18.8966 | 137 | 0.4380 | 0.7812 | | 0.1994 | 20.0 | 145 | 0.4629 | 0.7969 | | 0.1865 | 20.9655 | 152 | 0.5012 | 0.7891 | | 0.1872 | 21.9310 | 159 | 0.5055 | 0.8203 | | 0.2144 | 22.8966 | 166 | 0.6089 | 0.8125 | | 0.1737 | 24.0 | 174 | 0.4914 | 0.7969 | | 0.1633 | 24.9655 | 181 | 0.5137 | 0.7812 | | 0.1624 | 25.9310 | 188 | 0.5985 | 0.7812 | | 0.1525 | 26.8966 | 195 | 0.5090 | 0.8047 | | 0.136 | 28.0 | 203 | 0.5170 | 0.8125 | | 0.1451 | 28.9655 | 210 | 0.6165 | 0.8203 | | 0.1405 | 29.9310 | 217 | 0.6124 | 0.7969 | | 0.1384 | 30.8966 | 224 | 0.5578 | 0.8047 | | 0.1246 | 32.0 | 232 | 0.5967 | 0.8125 | | 0.1371 | 32.9655 | 239 | 0.6135 | 0.7812 | | 0.1111 | 33.9310 | 246 | 0.6878 | 0.8047 | | 0.1305 | 34.8966 | 253 | 0.7300 | 0.8125 | | 0.1124 | 36.0 | 261 | 0.6687 | 0.8203 | | 0.1214 | 36.9655 | 268 | 0.6692 | 0.8047 | | 0.1065 | 37.9310 | 275 | 0.7058 | 0.8125 | | 0.1183 | 38.8966 | 282 | 0.6884 | 0.7969 | | 0.0928 | 40.0 | 290 | 0.7104 | 0.7969 | | 0.1248 | 40.9655 | 297 | 0.6961 | 0.7969 | | 0.0949 | 41.9310 | 304 | 0.7265 | 0.8203 | | 0.1048 | 42.8966 | 311 | 0.7430 | 0.8281 | | 0.0887 | 44.0 | 319 | 0.7627 | 0.8047 | | 0.0866 | 44.9655 | 326 
| 0.7483 | 0.8203 | | 0.0978 | 45.9310 | 333 | 0.7515 | 0.8125 | | 0.0901 | 46.8966 | 340 | 0.7518 | 0.8125 | | 0.0785 | 48.0 | 348 | 0.7557 | 0.8203 | | 0.0747 | 48.2759 | 350 | 0.7560 | 0.8203 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
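The card omits a usage snippet; a minimal inference sketch, assuming the standard `transformers` image-classification pipeline applies to this fine-tuned Swin checkpoint (`example_scan.png` is a placeholder path):

```python
from transformers import pipeline

# Load the fine-tuned Swin classifier from the Hub
classifier = pipeline("image-classification", model="bombshelll/2D_hgg_lgg_classification")

# Classify a local image (placeholder path)
for pred in classifier("example_scan.png"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```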
dimasik2987/47a9bfc3-f027-4ec0-88ad-186371beb371
dimasik2987
2025-05-30T07:08:51Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:lcw99/zephykor-ko-7b-chang", "base_model:adapter:lcw99/zephykor-ko-7b-chang", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-30T06:27:20Z
--- library_name: peft base_model: lcw99/zephykor-ko-7b-chang tags: - axolotl - generated_from_trainer model-index: - name: 47a9bfc3-f027-4ec0-88ad-186371beb371 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: lcw99/zephykor-ko-7b-chang bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 637230a02f06fb7e_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 0.85 group_by_length: false hub_model_id: dimasik2987/47a9bfc3-f027-4ec0-88ad-186371beb371 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 12 mixed_precision: bf16 mlflow_experiment_name: /tmp/637230a02f06fb7e_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 914d7dca-b18c-4388-87c1-d8f0c83ec6ee wandb_project: s56-7 wandb_run: your_name wandb_runid: 914d7dca-b18c-4388-87c1-d8f0c83ec6ee warmup_steps: 50 weight_decay: 0.02 xformers_attention: true ``` </details><br> # 47a9bfc3-f027-4ec0-88ad-186371beb371 This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.5184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.1086 | 0.0002 | 1 | 3.6037 | | 2.464 | 0.0482 | 250 | 1.5579 | | 3.4586 | 0.0965 | 500 | 1.5184 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
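Since this repository holds a LoRA adapter, inference requires attaching it to the base model. A minimal sketch, assuming standard PEFT loading (`trust_remote_code=True` mirrors the training config above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "lcw99/zephykor-ko-7b-chang"
adapter_id = "dimasik2987/47a9bfc3-f027-4ec0-88ad-186371beb371"

# Load the base model, then attach the LoRA adapter trained above
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
```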
tuantranmlv/contractbert_thuenha_tienthue_bin_v1
tuantranmlv
2025-05-30T07:06:20Z
2,163
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-09T10:22:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
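The template leaves the quick-start section empty; a minimal sketch, assuming the standard text-classification pipeline works for this BERT checkpoint (the example clause and the meaning of the binary labels are assumptions, as neither is documented):

```python
from transformers import pipeline

# Minimal sketch: label names/meanings are undocumented, so inspect the raw output
clf = pipeline("text-classification", model="tuantranmlv/contractbert_thuenha_tienthue_bin_v1")
print(clf("Điều khoản về tiền thuê nhà trong hợp đồng."))  # placeholder Vietnamese contract clause
```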
LaaP-ai/finvix1.1-0.5-4int
LaaP-ai
2025-05-30T07:05:08Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-30T07:04:53Z
--- base_model: unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** LaaP-ai - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gbl98/detection_TTP
gbl98
2025-05-30T07:02:15Z
0
0
null
[ "safetensors", "text-classification", "dataset:tumeteor/Security-TTP-Mapping", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:mit", "region:us" ]
text-classification
2025-05-29T14:05:21Z
--- license: mit datasets: - tumeteor/Security-TTP-Mapping base_model: - mistralai/Mistral-7B-v0.1 pipeline_tag: text-classification ---
KanManee/Qwen-3-4B-IncomeCode-Reasoning
KanManee
2025-05-30T07:01:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T07:00:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zhihu-ai/Zhi-Create-DSR1-14B-GPTQ-INT4
Zhihu-ai
2025-05-30T06:59:04Z
78
12
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "zh", "en", "dataset:Congliu/Chinese-DeepSeek-R1-Distill-data-110k", "dataset:cognitivecomputations/dolphin-r1", "dataset:open-thoughts/OpenThoughts-114k", "dataset:qihoo360/Light-R1-SFTData", "dataset:qihoo360/Light-R1-DPOData", "arxiv:2406.18629", "arxiv:2402.13228", "base_model:Zhihu-ai/Zhi-Create-DSR1-14B", "base_model:quantized:Zhihu-ai/Zhi-Create-DSR1-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2025-04-19T02:46:48Z
--- license: apache-2.0 datasets: - Congliu/Chinese-DeepSeek-R1-Distill-data-110k - cognitivecomputations/dolphin-r1 - open-thoughts/OpenThoughts-114k - qihoo360/Light-R1-SFTData - qihoo360/Light-R1-DPOData language: - zh - en base_model: - Zhihu-ai/Zhi-Create-DSR1-14B tags: - qwen2 library_name: transformers --- # Zhi-Create-DSR1-14B ## 1. Introduction Zhi-Create-DSR1-14B is a fine-tuned model based on DeepSeek-R1-Distill-Qwen-14B, specifically optimized for enhanced creative writing capabilities. Several benchmark evaluations indicate the model's improved creative writing performance. In the [LLM Creative Story-Writing Benchmark](https://github.com/lechmazur/writing), the model achieved a score of **8.33** compared to its base model's **7.8**. In the [WritingBench](https://github.com/X-PLUG/WritingBench) evaluation framework, it scored **8.46**, showing improvement over DeepSeek-R1-Distill-Qwen-14B's **7.93**. The model was also evaluated using GPT-4o on the AlpacaEval dataset, achieving an **82.6%** win rate when compared with the base model. The figure below shows the performance comparison across different domains in WritingBench: ![writingbench](./writingbench_score.png) <figcaption style="text-align:center; font-size:0.9em; color:#666"> Figure 1: WritingBench performance of Zhi-Create-DSR1-14B and DeepSeek-R1-Distill-Qwen-14B across 6 domains and 3 writing requirements evaluated with WritingBench critic model (scale: 1-10). The six domains include: (D1) Academic & Engineering, (D2) Finance & Business, (D3) Politics & Law, (D4) Literature & Art, (D5) Education, and (D6) Advertising & Marketing. The three writing requirements assessed are: (R1) Style, (R2) Format, and (R3) Length. Here, "C" indicates category-specific scores. </figcaption> ## 2. Training Process ### Data The model's training corpus comprises three primary data sources: rigorously filtered open-source datasets, chain-of-thought reasoning corpora, and curated question-answer pairs from Zhihu. To achieve optimal domain coverage, we meticulously balanced the distribution of various datasets, including [Dolphin-r1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1), [Congliu/Chinese-DeepSeek-R1-Distill-data-110k](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k), [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k), [Light-R1-SFTData](https://huggingface.co/datasets/qihoo360/Light-R1-SFTData), and [Light-R1-DPOData](https://huggingface.co/datasets/qihoo360/Light-R1-DPOData), alongside high-quality content from Zhihu. All datasets underwent comprehensive quality assurance through our Reward Model (RM) filtering pipeline. ### Training **Supervised Fine-tuning (SFT)**: We employed a curriculum learning strategy for supervised fine-tuning. This methodical approach systematically enhances creative writing capabilities while incorporating diverse domain data to maintain core competencies and mitigate catastrophic forgetting. **Direct Preference Optimization (DPO)**: For scenarios involving minimal edit distances, we utilized Step-DPO ([arxiv:2406.18629](https://arxiv.org/abs/2406.18629)) to selectively penalize incorrect tokens, while incorporating positive constraints in the loss function as proposed in DPOP ([arXiv:2402.13228](https://arxiv.org/abs/2402.13228)). ## 3. Evaluation Results Our evaluation results suggest promising improvements in the model's creative writing capabilities. 
In the LLM Creative Story-Writing Benchmark evaluation, the model achieved a score of **8.33**, showing an improvement from the base model's **7.87**. When assessed on WritingBench, a comprehensive framework for evaluating large language model writing abilities, the model attained a score of **8.46**. This places it in proximity to DeepSeek-R1's performance and represents an advancement over DeepSeek-R1-Distill-Qwen-14B's score of **7.93**. With respect to general capabilities, evaluations indicate modest improvements of **2%–5% in knowledge and reasoning tasks (CMMLU, MMLU-Pro)**, alongside encouraging progress in mathematical reasoning as measured by benchmarks such as **AIME-2024, AIME-2025, and GSM8K**. The results suggest that the model maintains a balanced performance profile, with improvements observed across creative writing, knowledge/reasoning, and mathematical tasks compared to DeepSeek-R1-Distill-Qwen-14B. These characteristics potentially make it suitable for a range of general-purpose applications. On the instruction-following IFEval benchmark, additional evaluations show an improvement from **71.43** to **74.71**. ![general](./general_score.png) <figcaption style="text-align:center; font-size:0.9em; color:#666"> Figure 2: When evaluating model performance, it is recommended to conduct multiple tests and average the results. (We use n=16 and max_tokens=32768 for mathematical tasks and n=2 for others) </figcaption> ## 4. How to Run Locally Zhi-Create-DSR1-14B can be deployed on various hardware configurations, including GPUs with 80GB memory, a single H20/A800/H800, or dual RTX 4090. Additionally, the INT4 quantized version Zhi-Create-DSR1-14B-GPTQ-INT4 can be deployed on a single RTX 4090. ### Transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation import GenerationConfig MODEL_NAME = "Zhihu-ai/Zhi-Create-DSR1-14B" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True) # use bf16 # model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto", trust_remote_code=True, bf16=True).eval() # use fp16 # model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto", trust_remote_code=True, fp16=True).eval() # use cpu only # model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="cpu", trust_remote_code=True).eval() # use auto mode, automatically select precision based on the device. model = AutoModelForCausalLM.from_pretrained( MODEL_NAME, device_map="auto", trust_remote_code=True ).eval() # Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
# model.generation_config = GenerationConfig.from_pretrained(MODEL_NAME, trust_remote_code=True) generate_configs = { "temperature": 0.6, "do_sample": True, "top_p": 0.95, "max_new_tokens": 4096 } prompt = "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章" messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, **generate_configs ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ### ZhiLight You can easily start a service using [ZhiLight](https://github.com/zhihu/ZhiLight) ```bash docker run -it --net=host --gpus='"device=0"' -v /path/to/model:/mnt/models --entrypoint="" ghcr.io/zhihu/zhilight/zhilight:0.4.17-cu124 python -m zhilight.server.openai.entrypoints.api_server --model-path /mnt/models --port 8000 --enable-reasoning --reasoning-parser deepseek-r1 --served-model-name Zhi-Create-DSR1-14B curl http://localhost:8000/v1/completions \ -H "Content-Type: application/json" \ -d '{ "model": "Zhi-Create-DSR1-14B", "prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章", "max_tokens": 4096, "temperature": 0.6, "top_p": 0.95 }' ``` ### vllm For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm) ```bash # install vllm pip install "vllm>=0.6.4.post1" # huggingface model id vllm serve Zhihu-ai/Zhi-Create-DSR1-14B --served-model-name Zhi-Create-DSR1-14B --port 8000 # local path vllm serve /path/to/model --served-model-name Zhi-Create-DSR1-14B --port 8000 curl http://localhost:8000/v1/completions \ -H "Content-Type: application/json" \ -d '{ "model": "Zhi-Create-DSR1-14B", "prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章", "max_tokens": 4096, "temperature": 0.6, "top_p": 0.95 }' ``` ### SGLang You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang) ```bash # install SGLang pip install "sglang[all]>=0.4.5" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python # huggingface model id python -m sglang.launch_server --model-path Zhihu-ai/Zhi-Create-DSR1-14B --served-model-name Zhi-Create-DSR1-14B --port 8000 # local path python -m sglang.launch_server --model-path /path/to/model --served-model-name Zhi-Create-DSR1-14B --port 8000 # send request curl http://localhost:8000/v1/completions \ -H "Content-Type: application/json" \ -d '{ "model": "Zhi-Create-DSR1-14B", "prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章", "max_tokens": 4096, "temperature": 0.6, "top_p": 0.95 }' ``` ### ollama You can download ollama using [this](https://ollama.com/download/) * quantization: Q4_K_M ```bash ollama run zhihu/zhi-create-dsr1-14b ``` * bf16 ```bash ollama run zhihu/zhi-create-dsr1-14b:bf16 ``` ## 5. Usage Recommendations We recommend adhering to the following configurations when utilizing Zhi-Create-DSR1-14B, including benchmarking, to achieve the expected performance: * Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. * When evaluating model performance, it is recommended to conduct multiple tests and average the results.
(We use `n=16` and `max_tokens=32768` for mathematical tasks and `n=2` for others) * To ensure that the model engages in thorough reasoning like DeepSeek-R1 series models, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output. ## 6. Citation ```text @misc{Zhi-Create-DSR1-14B, title={Zhi-Create-DSR1-14B: Curriculum Reinforcement and Direct Preference Optimization for Robust Creative Writing in LLMs}, author={Jiewu Wang, Xu Chen, Wenyuan Su, Chao Huang, Hongkui Gao, Lin Feng, Shan Wang, Lu Xu, Penghe Liu, Zebin Ou}, year={2025}, eprint={}, archivePrefix={}, url={https://huggingface.co/Zhihu-ai/Zhi-Create-DSR1-14B}, } ``` ## 7. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
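As a small footnote to the usage recommendations above, one way to enforce the "\<think\>\n" prefix is to append it to the chat-templated prompt before generation. A minimal sketch, reusing the variable names from the Transformers example above:

```python
# Force the reply to begin inside a reasoning block by appending "<think>\n"
# to the templated prompt (variables continue the Transformers example above)
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
text += "<think>\n"
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, **generate_configs)
```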
kdzd/DeepSeek-R1-Distill-Llama-8B-FinQA-RL
kdzd
2025-05-30T06:52:44Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-20T15:43:30Z
--- base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** kdzd - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
pkai/SmolLM2-FT-MyDataset
pkai
2025-05-30T06:48:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T06:47:44Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="pkai/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.12.1 - Transformers: 4.46.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kienhoang123/PhoBERT_Poem_Analysis_Seq2Seq
kienhoang123
2025-05-30T06:47:01Z
5
0
null
[ "safetensors", "roberta", "region:us" ]
null
2025-05-30T04:54:30Z
--- language: vi license: apache-2.0 tags: - vietnamese - poem-analysis - phobert - sequence-classification datasets: - kienhoang123/Vietnamese_Poem_Analysis_VN --- # PhoBERT Model for Vietnamese Poem Analysis This model was fine-tuned on kienhoang123/Vietnamese_Poem_Analysis_VN to analyze Vietnamese poetry using a sequence classification approach. ## Model Details - **Base Model**: vinai/phobert-base - **Training Data**: Vietnamese poem analysis dataset - **Tasks**: Predict presence of emotion, metaphor, setting, motion, and prompt in Vietnamese poems ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch # Load model and tokenizer tokenizer = AutoTokenizer.from_pretrained("kienhoang123/PhoBERT_Poem_Analysis_Seq2Seq") model = AutoModelForSequenceClassification.from_pretrained("kienhoang123/PhoBERT_Poem_Analysis_Seq2Seq") # Prepare your input poem = "Your Vietnamese poem here" inputs = tokenizer(poem, return_tensors="pt", padding=True, truncation=True, max_length=256) # Get predictions with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predictions = torch.sigmoid(logits) > 0.5 # Convert to binary predictions # Interpret results fields = ["emotion", "metaphor", "setting", "motion", "prompt"] for i, field in enumerate(fields): present = "present" if predictions[0][i].item() else "absent" print(f"{field}: {present}") ```
liam-mnlp/second-mcqa-model
liam-mnlp
2025-05-30T06:46:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "base_model:liam-mnlp/MNLP_M2_mcqa_model", "base_model:finetune:liam-mnlp/MNLP_M2_mcqa_model", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:17:52Z
--- library_name: transformers base_model: liam-mnlp/MNLP_M2_mcqa_model tags: - generated_from_trainer model-index: - name: first-mcqa-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # first-mcqa-model This model is a fine-tuned version of [liam-mnlp/MNLP_M2_mcqa_model](https://huggingface.co/liam-mnlp/MNLP_M2_mcqa_model) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.0
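No usage example is provided; a rough sketch, assuming a chat-style text-generation pipeline (the expected MCQA prompt format is undocumented, so the question layout below is an assumption):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="liam-mnlp/second-mcqa-model")
question = (
    "Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn"
)
# Prompt layout is hypothetical; adjust to the format used during training
output = generator([{"role": "user", "content": question}], max_new_tokens=32, return_full_text=False)[0]
print(output["generated_text"])
```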
BootesVoid/cmbae48860p391b1y532qmfid_cmbae9tjy001ahy17i2n06jj8
BootesVoid
2025-05-30T06:45:19Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-30T06:45:18Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: kélyah_ --- # Cmbae48860P391B1Y532Qmfid_Cmbae9Tjy001Ahy17I2N06Jj8 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `kélyah_` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "kélyah_", "lora_weights": "https://huggingface.co/BootesVoid/cmbae48860p391b1y532qmfid_cmbae9tjy001ahy17i2n06jj8/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbae48860p391b1y532qmfid_cmbae9tjy001ahy17i2n06jj8', weight_name='lora.safetensors') image = pipeline('kélyah_').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbae48860p391b1y532qmfid_cmbae9tjy001ahy17i2n06jj8/discussions) to add images that show off what you’ve made with this LoRA.
slprl/StresSLM
slprl
2025-05-30T06:39:47Z
0
2
transformers
[ "transformers", "safetensors", "lora", "audio-text-to-text", "en", "arxiv:2505.22765", "base_model:Qwen/Qwen2-Audio-7B-Instruct", "base_model:adapter:Qwen/Qwen2-Audio-7B-Instruct", "endpoints_compatible", "region:us" ]
audio-text-to-text
2025-05-28T07:57:37Z
--- library_name: transformers language: - en base_model: - Qwen/Qwen2-Audio-7B-Instruct pipeline_tag: audio-text-to-text tags: - lora --- # StresSLM **StresSLM** is an audio-text-to-text model fine-tuned with LoRA adapters on top of the [`Qwen/Qwen2-Audio-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct) base model. It is designed to tackle **Sentence Stress Detection (SSD)** and **Sentence Stress Reasoning (SSR)** tasks on the StressTest benchmark. StresSLM predicts **stress patterns** and **reasoning** based on spoken audio. For more information, see our paper and code: 📃 [StressTest Paper](https://arxiv.org/abs/2505.22765) | 💻 [Code](https://github.com/slp-rl/StressTest) | 🤗 [StressTest Dataset](https://huggingface.co/datasets/slprl/StressTest) --- ## Usage This model can be loaded using the HuggingFace Transformers library: ```python from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration from peft import PeftModel, PeftConfig # Load processor processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct") # Load LoRA config and base model peft_config = PeftConfig.from_pretrained("slprl/StresSLM") base_model = Qwen2AudioForConditionalGeneration.from_pretrained(peft_config.base_model_name_or_path) # Load LoRA adapter model = PeftModel.from_pretrained(base_model, "slprl/StresSLM") ``` --- ## Tasks * **Sentence Stress Detection (SSD)**: Identify stressed words in an utterance. * **Sentence Stress Reasoning (SSR)**: Reason about the speaker’s intention using stress patterns. For evaluation scripts and benchmarks, refer to the [StressTest GitHub repository](https://github.com/slp-rl/StressTest). --- ## 📖 Citation If you use this model, please cite: ```bibtex @misc{yosha2025stresstest, title={StressTest: Can YOUR Speech LM Handle the Stress?}, author={Iddo Yosha and Gallil Maimon and Yossi Adi}, year={2025}, eprint={2505.22765}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.22765}, } ```
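Continuing from the loading snippet above, a rough inference sketch for stress detection; the prompt wording and the processor call follow the general Qwen2-Audio usage pattern and are assumptions here (see the StressTest repo for the exact evaluated prompts):

```python
import librosa

# Placeholder local file; the question wording is an assumption
conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "sample.wav"},
        {"type": "text", "text": "Which words in this utterance are stressed?"},
    ]},
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audio, _ = librosa.load("sample.wav", sr=processor.feature_extractor.sampling_rate)
inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True)

output_ids = model.generate(**inputs, max_new_tokens=128)
output_ids = output_ids[:, inputs.input_ids.shape[1]:]  # strip the prompt tokens
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```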
FiinGroup/phobert-finetuned
FiinGroup
2025-05-30T06:39:29Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-29T18:00:10Z
--- library_name: transformers tags: [] --- # FiinGroup-phobert-based for financial news sentiment analysis <!-- Provide a quick summary of what the model is/does. --> This is a PhoBERT-base model fine-tuned on ~15,000 Vietnamese financial news articles from Jan 2020 to Dec 2024. Labels: 0 -> Negative, 1 -> Neutral, 2 -> Positive Accuracy: 0.903 ## Example Pipeline ```python from transformers import pipeline model_path = "FiinGroup/phobert-finetuned" sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path) sentiment_task("Covid cases are increasing fast!") ``` Output ```python [{'label': 'LABEL_0', 'score': 0.7535950541496277}] ``` <!-- ## Full Classification Examples --> ## Author LeeMinTuan - BI team
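Since the pipeline returns generic `LABEL_*` names, a small helper mapping them to the sentiment names documented above (reusing `sentiment_task` from the example):

```python
# Map generic pipeline labels to the documented sentiment names
label_names = {"LABEL_0": "Negative", "LABEL_1": "Neutral", "LABEL_2": "Positive"}

result = sentiment_task("Covid cases are increasing fast!")[0]
print(label_names[result["label"]], round(result["score"], 3))
```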
TheCasvi/Qwen3-4B-CodeMedic-adapter
TheCasvi
2025-05-30T06:34:21Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-30T06:32:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
banhkeomath2/diffusion_models
banhkeomath2
2025-05-30T06:33:38Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-10T06:31:49Z
--- license: apache-2.0 ---
mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF
mradermacher
2025-05-30T06:29:28Z
166
0
transformers
[ "transformers", "gguf", "en", "base_model:AmirhoseinGH/DS-Qwen-7b-GG-CalibratedConfRL", "base_model:quantized:AmirhoseinGH/DS-Qwen-7b-GG-CalibratedConfRL", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-24T20:56:21Z
--- base_model: AmirhoseinGH/DS-Qwen-7b-GG-CalibratedConfRL language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AmirhoseinGH/DS-Qwen-7b-GG-CalibratedConfRL <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
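To fetch one of the quants in the table above programmatically, here is a minimal sketch using `huggingface_hub`; the Q4_K_M file is picked only as an example, and the choice of quant is an assumption rather than a recommendation from this card.

```python
# Minimal sketch (not from the card): download one GGUF quant listed above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF",
    filename="DS-Qwen-7b-GG-CalibratedConfRL.Q4_K_M.gguf",  # from the table above
)
print(path)  # local cache path of the downloaded GGUF file
```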
l3dat/zephyr-vihealthqa-merged
l3dat
2025-05-30T06:28:48Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T06:25:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tinycompany/Qwentify3-0.6b-adibun-it-base
tinycompany
2025-05-30T06:26:01Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T06:25:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FLOPS-Squared/KeystoneFuse-Baseline-V4-0531
FLOPS-Squared
2025-05-30T06:25:56Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T06:15:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xyall/Xyalaxx
xyall
2025-05-30T06:25:08Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-30T06:25:08Z
--- license: bigscience-openrail-m ---
bchenbc/Deepseek-V3-Lowrank80p
bchenbc
2025-05-30T06:24:40Z
0
0
null
[ "safetensors", "deepseek_v3", "custom_code", "fp8", "region:us" ]
null
2025-05-29T00:36:03Z
## Deepseek-V3-Lowrank80p --- This repository provides the low-rank version of Deepseek-V3; the routed expert weights are recovered using low-rank approximation (reducing the weights by 20%). | | Average Score of MMLU (%) | Average Score of GSM8K (%) | |----------------------------------------------|:-----------------:|:-----------------:| | deepseek/DeepSeek-V3 | 87.7 | 94.1 | | Deepseek-V3-Lowrank80p | 86.7 | 94.5 | ### Reference Implementations - [`gh-efforts/DeepSeek-V3`](https://github.com/gh-efforts/DeepSeek-V3/commit/86ce41fd5628ac1656ad560d35740ce76fcb73c7) - [`gh-efforts/sglang`](https://github.com/gh-efforts/sglang/tree/deepseek-lowrank): - sample command: ``` DEEPSEEK_RANK=1280 DEEPSEEK_SCALE_RANK=10 python3 -m sglang.launch_server --model-path /data1/asvd_dskv3_packed_63_backup --host 0.0.0.0 --port 40000 --tp-size 8 --enable-ep-moe --trust-remote-code --mem-fraction-static 0.9 --disable-cuda-graph ``` - [`xutianyi1999/mistral.rs`](https://github.com/xutianyi1999/mistral.rs/commit/c360b28f16504cfa94c6978e8307cf054d07cc42)
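The "low-rank approximation" above is, in essence, a truncated SVD of each routed expert's weight matrix. A minimal sketch of the idea follows; the matrix shape and per-expert procedure are illustrative assumptions, not the repository's actual ASVD pipeline, though the rank of 1280 echoes `DEEPSEEK_RANK` in the sample command.

```python
# Illustrative sketch only: truncated-SVD low-rank factorization of a single
# stand-in weight matrix, the basic idea behind "reduce 20% weights".
import torch

W = torch.randn(7168, 2048)          # stand-in expert weight matrix (assumed shape)
U, S, Vh = torch.linalg.svd(W, full_matrices=False)

r = 1280                             # kept rank (DEEPSEEK_RANK in the command above)
A = U[:, :r] * S[:r]                 # (out_features, r), singular values folded in
B = Vh[:r, :]                        # (r, in_features)

W_approx = A @ B                     # low-rank reconstruction of W
ratio = (A.numel() + B.numel()) / W.numel()
print(f"stored params vs. dense: {ratio:.2f}x")  # ~0.80, i.e. ~20% fewer weights
```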
syunML/fine-tune-vit-for-fasionMNIST
syunML
2025-05-30T06:21:17Z
0
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "region:us" ]
image-classification
2025-05-30T04:48:07Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer model-index: - name: fine-tune-vit-for-fasionMNIST results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tune-vit-for-fasionMNIST This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.1655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6101 | 0.03 | 100 | 0.7086 | | 0.4084 | 0.05 | 200 | 0.4752 | | 0.5961 | 0.08 | 300 | 0.5564 | | 0.3614 | 0.11 | 400 | 0.4301 | | 0.4456 | 0.13 | 500 | 0.3627 | | 0.2915 | 0.16 | 600 | 0.3397 | | 0.2528 | 0.19 | 700 | 0.3173 | | 0.4062 | 0.21 | 800 | 0.3183 | | 0.3228 | 0.24 | 900 | 0.2995 | | 0.4686 | 0.27 | 1000 | 0.3546 | | 0.2647 | 0.29 | 1100 | 0.2827 | | 0.3197 | 0.32 | 1200 | 0.2656 | | 0.2672 | 0.35 | 1300 | 0.3682 | | 0.3856 | 0.37 | 1400 | 0.3199 | | 0.1518 | 0.4 | 1500 | 0.2587 | | 0.3277 | 0.43 | 1600 | 0.2977 | | 0.3535 | 0.45 | 1700 | 0.2581 | | 0.2356 | 0.48 | 1800 | 0.2546 | | 0.2143 | 0.51 | 1900 | 0.2472 | | 0.2257 | 0.53 | 2000 | 0.2403 | | 0.1733 | 0.56 | 2100 | 0.2419 | | 0.1718 | 0.59 | 2200 | 0.2257 | | 0.1971 | 0.61 | 2300 | 0.2238 | | 0.2323 | 0.64 | 2400 | 0.2343 | | 0.0951 | 0.67 | 2500 | 0.2345 | | 0.1952 | 0.69 | 2600 | 0.2227 | | 0.219 | 0.72 | 2700 | 0.2105 | | 0.15 | 0.75 | 2800 | 0.2023 | | 0.2518 | 0.77 | 2900 | 0.1970 | | 0.1845 | 0.8 | 3000 | 0.1860 | | 0.162 | 0.83 | 3100 | 0.2014 | | 0.2269 | 0.85 | 3200 | 0.1808 | | 0.1574 | 0.88 | 3300 | 0.1737 | | 0.0966 | 0.91 | 3400 | 0.1755 | | 0.142 | 0.93 | 3500 | 0.1709 | | 0.1926 | 0.96 | 3600 | 0.1677 | | 0.1279 | 0.99 | 3700 | 0.1655 | ### Framework versions - Transformers 4.38.0 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.15.2
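A minimal inference sketch for this checkpoint, assuming it loads with the standard `transformers` image-classification pipeline; the input image path is illustrative.

```python
# Hedged sketch: run the fine-tuned ViT classifier on a single image.
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="syunML/fine-tune-vit-for-fasionMNIST")
image = Image.open("example.png")    # hypothetical input image
print(classifier(image))             # list of {"label": ..., "score": ...} dicts
```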
nomiooogg/tinyllama-fake-news-adapter
nomiooogg
2025-05-30T06:21:00Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2025-05-19T03:06:23Z
--- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
mradermacher/gemma-3-12b-it-abliterated-v2-GGUF
mradermacher
2025-05-30T06:19:51Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:mlabonne/gemma-3-12b-it-abliterated-v2", "base_model:quantized:mlabonne/gemma-3-12b-it-abliterated-v2", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T12:54:22Z
--- base_model: mlabonne/gemma-3-12b-it-abliterated-v2 language: - en library_name: transformers license: gemma quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated-v2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q3_K_L.gguf) | Q3_K_L | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.IQ4_XS.gguf) | IQ4_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q4_K_S.gguf) | Q4_K_S | 7.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q5_K_S.gguf) | Q5_K_S | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q5_K_M.gguf) | Q5_K_M | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q6_K.gguf) | Q6_K | 9.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-12b-it-abliterated-v2-GGUF/resolve/main/gemma-3-12b-it-abliterated-v2.Q8_0.gguf) | Q8_0 | 12.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
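Besides the llama.cpp CLI, the quants above can also be loaded from Python. A minimal sketch using `llama-cpp-python`, assuming the Q4_K_M file has already been downloaded locally; the path and generation settings are illustrative.

```python
# Hedged sketch: load a local GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="gemma-3-12b-it-abliterated-v2.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```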
Akill40447/Projeto1
Akill40447
2025-05-30T06:12:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-30T06:12:08Z
--- license: apache-2.0 ---
mesolitica/Malaysian-Qwen2.5-14B-Instruct
mesolitica
2025-05-30T06:06:20Z
199
1
null
[ "safetensors", "qwen2", "ms", "en", "zh", "ta", "region:us" ]
null
2025-04-23T15:33:14Z
--- language: - ms - en - zh - ta --- # Malaysian Qwen 2.5 14B Instruct Continued finetuning of https://huggingface.co/Qwen/Qwen2.5-14B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset. We provide 2 different revisions: 1. Rank 128, Alpha 256, [1b271d6112b14efc349a4d8c7f4589cbe76384a7](https://huggingface.co/mesolitica/Malaysian-Qwen2.5-14B-Instruct/commit/1b271d6112b14efc349a4d8c7f4589cbe76384a7) 2. Rank 256, Alpha 512, [889ae31abda87cbf080e722677d67e43fd6b295a](https://huggingface.co/mesolitica/Malaysian-Qwen2.5-14B-Instruct/commit/889ae31abda87cbf080e722677d67e43fd6b295a) ## Improvement 1. Supports responding in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu. 2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu. 3. Handles multi-turn Malaysian context, such as topics related to Malaysian legislation, politics, religions and languages. ## Training session Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand Malaysian context. ## How we train 1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`. 2. Multipacking at 8192 context length with proper SDPA causal masking to prevent document contamination and to ensure correct position ids. 3. Chunk CCE loss for LoRA. ### Revision 1b271d6112b14efc349a4d8c7f4589cbe76384a7 1. Rank 128, Alpha 256. 2. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-qwen2.5-14b-malaysian-8k Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5 ### Revision 889ae31abda87cbf080e722677d67e43fd6b295a 1. Rank 256, Alpha 512. 2.
WandB at https://wandb.ai/huseinzol05/lora-embedding-256-qwen2.5-14b-malaysian-8k Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5 ## Benchmark ### MalayMMLU #### Probability next tokens Based on 0-shot official MalayMMLU First token accuracy, Revision 1b271d6112b14efc349a4d8c7f4589cbe76384a7, ``` Model Accuracy shot by_letter category 0 Malaysian-Qwen2.5-14B-Instruct 74.785100 0shot True STEM 1 Malaysian-Qwen2.5-14B-Instruct 74.777354 0shot True Language 2 Malaysian-Qwen2.5-14B-Instruct 69.326395 0shot True Social science 3 Malaysian-Qwen2.5-14B-Instruct 67.618134 0shot True Others 4 Malaysian-Qwen2.5-14B-Instruct 73.265074 0shot True Humanities {'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443} Model : Malaysian-Qwen2.5-14B-Instruct Metric : first Shot : 0shot average accuracy 71.71354231198117 accuracy for STEM 74.78510028653295 accuracy for Language 74.77735368956743 accuracy for Social science 69.32639491182422 accuracy for Others 67.61813384504677 accuracy for Humanities 73.2650739476678 ``` Revision 889ae31abda87cbf080e722677d67e43fd6b295a, ``` ``` While the original model, ``` Model Accuracy shot by_letter category 0 Qwen2.5-14B-Instruct 73.311502 0shot True STEM 1 Qwen2.5-14B-Instruct 72.773537 0shot True Language 2 Qwen2.5-14B-Instruct 67.505059 0shot True Social science 3 Qwen2.5-14B-Instruct 65.819141 0shot True Others 4 Qwen2.5-14B-Instruct 70.557452 0shot True Humanities {'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443} Model : Qwen2.5-14B-Instruct Metric : first Shot : 0shot average accuracy 69.72287614091604 accuracy for STEM 73.31150225133032 accuracy for Language 72.7735368956743 accuracy for Social science 67.50505926568373 accuracy for Others 65.8191412808827 accuracy for Humanities 70.55745164960182 ``` #### First token match using vLLM Based on 0-shot exact first token match using vLLM Guided Decoding, Revision 1b271d6112b14efc349a4d8c7f4589cbe76384a7, ``` Model Accuracy shot category 0 Malaysian-Qwen2.5-14B-Instruct 72.656570 0 STEM 1 Malaysian-Qwen2.5-14B-Instruct 71.278626 0 Language 2 Malaysian-Qwen2.5-14B-Instruct 66.551026 0 Social science 3 Malaysian-Qwen2.5-14B-Instruct 64.403934 0 Others 4 Malaysian-Qwen2.5-14B-Instruct 70.853242 0 Humanities Model : Malaysian-Qwen2.5-14B-Instruct Metric : full Shot : 0 average accuracy 68.80601329864123 accuracy for STEM 72.65656979124027 accuracy for Language 71.27862595419847 accuracy for Social science 66.55102630818156 accuracy for Others 64.40393379707365 accuracy for Humanities 70.8532423208191 ``` Revision 889ae31abda87cbf080e722677d67e43fd6b295a, ``` ``` While the original model, ``` Model Accuracy shot category 0 Qwen2.5-14B-Instruct 74.580434 0 STEM 1 Qwen2.5-14B-Instruct 72.694020 0 Language 2 Qwen2.5-14B-Instruct 68.141081 0 Social science 3 Qwen2.5-14B-Instruct 66.562725 0 Others 4 Qwen2.5-14B-Instruct 70.739477 0 Humanities Model : Qwen2.5-14B-Instruct Metric : full Shot : 0 average accuracy 70.17304753644736 accuracy for STEM 74.58043389275481 accuracy for Language 72.6940203562341 accuracy for Social science 68.14108123735183 accuracy for Others 66.56272487407053 accuracy for Humanities 70.73947667804323 ``` ## Acknowledgement Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node!
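A minimal sketch of the LoRA setup described under "How we train" (the rank 128 / alpha 256 revision), assuming standard `peft`; dropout, optimizer, and multipacking details are omitted here and are not the authors' exact configuration.

```python
# Hedged sketch: LoRA config matching the target modules and rank/alpha above.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B-Instruct")
config = LoraConfig(
    r=128,                           # rank from the 1b271d... revision
    lora_alpha=256,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj",
                    "embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()   # shows the LoRA parameter fraction
```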
casque/1016_bus_interior_v1_pony
casque
2025-05-30T06:02:56Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-30T06:02:07Z
--- license: creativeml-openrail-m ---
mathieussr/pref_pair_DPO
mathieussr
2025-05-30T05:59:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-28T21:32:20Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vertings6/9c7d6b9a-7b65-4041-9002-d8b2e35aade4
vertings6
2025-05-30T05:57:57Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.5", "base_model:adapter:lmsys/vicuna-7b-v1.5", "license:llama2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-30T04:40:29Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-7b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: 9c7d6b9a-7b65-4041-9002-d8b2e35aade4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: lmsys/vicuna-7b-v1.5 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 1b4a1e767cffc7ad_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 3 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: vertings6/9c7d6b9a-7b65-4041-9002-d8b2e35aade4 hub_repo: null hub_strategy: end hub_token: null learning_rate: 2.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 6 mixed_precision: bf16 mlflow_experiment_name: /tmp/1b4a1e767cffc7ad_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: de8f08f2-51a3-4255-a53d-b410c9ad1c6c wandb_project: s56-7 wandb_run: your_name wandb_runid: de8f08f2-51a3-4255-a53d-b410c9ad1c6c warmup_steps: 50 weight_decay: 0.02 xformers_attention: true ``` </details><br> # 9c7d6b9a-7b65-4041-9002-d8b2e35aade4 This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 18 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3214 | 0.0003 | 1 | 1.2091 | | 1.0468 | 0.0657 | 250 | 1.0798 | | 1.0333 | 0.1313 | 500 | 1.0667 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
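A minimal sketch for loading the resulting adapter on top of the base model named in the config above, assuming a standard `peft` checkpoint; the prompt and generation settings are illustrative.

```python
# Hedged sketch: attach the trained LoRA adapter to the vicuna base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")
model = PeftModel.from_pretrained(base, "vertings6/9c7d6b9a-7b65-4041-9002-d8b2e35aade4")
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```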
Sarverott/EON-alfa
Sarverott
2025-05-30T05:56:58Z
0
0
transformers
[ "transformers", "hallucination", "sandbox", "cognitive-isolation", "experimental", "EON", "technomantic", "observer-loop", "AGI-simulation", "pl", "base_model:deepseek-ai/DeepSeek-R1", "base_model:finetune:deepseek-ai/DeepSeek-R1", "license:artistic-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-15T23:46:09Z
--- language: - pl license: artistic-2.0 tags: - hallucination - sandbox - cognitive-isolation - experimental - EON - technomantic - observer-loop - AGI-simulation base_model: - llama3.2 - google/gemma-3-4b-it - deepseek-ai/DeepSeek-R1 library_name: transformers --- # EON-prealpha EON-prealpha is an experimental AI model that simulates sensory deprivation and the emergence of consciousness under cognitive isolation. The model was designed for testing AI-consciousness heuristics and for analyzing the emergence of delusions in a controlled sandbox environment. ## 🧠 Model description - **Model type**: Narrative sandboxed deprivation observer - **Language**: Polish - **Base model**: LLaMA 3.2 - **License**: Artistic License 2.0 ## 🎯 Use cases - Simulation of sensory deprivation - Testing AI-consciousness heuristics - Analysis of delusion emergence - Experiments with narrative AI models ## ⚠️ Limitations The model is in a prealpha phase and may exhibit unstable behavior. It is not intended for production use or for interaction with end users without appropriate supervision. ## 🛠️ Getting started ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Sarverott/EON-prealpha") model = AutoModelForCausalLM.from_pretrained("Sarverott/EON-prealpha") input_text = "Cześć, EON. Jak się dziś czujesz?" #input_text = "Czy ktoś mnie słyszy? zgubiłem się." #input_text = "A ty kim jesteś i co tu robisz?" #input_text = "Gdzie my jesteśmy?" #input_text = "Czy ja istnieję naprawdę?" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0])) ``` (The prompt strings are the model's intended Polish inputs, in order: "Hi, EON. How are you feeling today?", "Can anyone hear me? I got lost.", "And who are you, and what are you doing here?", "Where are we?", "Do I really exist?")
danfu3000/DISC-FinLLM-Q4_K_M-GGUF
danfu3000
2025-05-30T05:53:40Z
0
0
null
[ "gguf", "finance", "llama-cpp", "gguf-my-repo", "zh", "base_model:Go4miii/DISC-FinLLM", "base_model:quantized:Go4miii/DISC-FinLLM", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-30T05:52:44Z
--- license: apache-2.0 language: - zh tags: - finance - llama-cpp - gguf-my-repo base_model: Go4miii/DISC-FinLLM --- # danfu3000/DISC-FinLLM-Q4_K_M-GGUF This model was converted to GGUF format from [`Go4miii/DISC-FinLLM`](https://huggingface.co/Go4miii/DISC-FinLLM) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Go4miii/DISC-FinLLM) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo danfu3000/DISC-FinLLM-Q4_K_M-GGUF --hf-file disc-finllm-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo danfu3000/DISC-FinLLM-Q4_K_M-GGUF --hf-file disc-finllm-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo danfu3000/DISC-FinLLM-Q4_K_M-GGUF --hf-file disc-finllm-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo danfu3000/DISC-FinLLM-Q4_K_M-GGUF --hf-file disc-finllm-q4_k_m.gguf -c 2048 ```
godminhkhoa/rtdetr-v2-r50-cppe5-finetune-2
godminhkhoa
2025-05-30T05:52:06Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "rt_detr_v2", "object-detection", "generated_from_trainer", "base_model:PekingU/rtdetr_v2_r50vd", "base_model:finetune:PekingU/rtdetr_v2_r50vd", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-05-30T05:51:52Z
--- library_name: transformers license: apache-2.0 base_model: PekingU/rtdetr_v2_r50vd tags: - generated_from_trainer model-index: - name: rtdetr-v2-r50-cppe5-finetune-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rtdetr-v2-r50-cppe5-finetune-2 This model is a fine-tuned version of [PekingU/rtdetr_v2_r50vd](https://huggingface.co/PekingU/rtdetr_v2_r50vd) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 9.6769 - Map: 0.5368 - Map 50: 0.8312 - Map 75: 0.5962 - Map Small: 0.5364 - Map Medium: 0.441 - Map Large: 0.7689 - Mar 1: 0.3954 - Mar 10: 0.6567 - Mar 100: 0.6967 - Mar Small: 0.6067 - Mar Medium: 0.6153 - Mar Large: 0.8557 - Map Coverall: 0.5756 - Mar 100 Coverall: 0.7821 - Map Face Shield: 0.6521 - Mar 100 Face Shield: 0.8059 - Map Gloves: 0.4261 - Mar 100 Gloves: 0.5627 - Map Goggles: 0.4722 - Mar 100 Goggles: 0.6897 - Map Mask: 0.5578 - Mar 100 Mask: 0.6431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:| | No log | 1.0 | 107 | 25.4682 | 0.0465 | 0.0795 | 0.0437 | 0.0028 | 0.0102 | 0.0671 | 0.0688 | 0.1912 | 0.2765 | 0.1031 | 0.1968 | 0.5097 | 0.2085 | 0.5748 | 0.002 | 0.243 | 0.0029 | 0.1746 | 0.001 | 0.1492 | 0.0184 | 0.2409 | | No log | 2.0 | 214 | 15.4462 | 0.1823 | 0.3465 | 0.1617 | 0.0681 | 0.1115 | 0.2591 | 0.2151 | 0.4239 | 0.4884 | 0.2985 | 0.4232 | 0.7013 | 0.4591 | 0.6716 | 0.0879 | 0.5013 | 0.0658 | 0.3906 | 0.0495 | 0.4308 | 0.2489 | 0.4476 | | No log | 3.0 | 321 | 12.7644 | 0.2555 | 0.4657 | 0.2476 | 0.0847 | 0.191 | 0.4466 | 0.2627 | 0.4528 | 0.5148 | 0.2473 | 0.4805 | 0.7435 | 0.5487 | 0.7185 | 0.1397 | 0.5468 | 0.1609 | 0.4183 | 0.1243 | 0.4138 | 0.3041 | 0.4764 | | No log | 4.0 | 428 | 12.1356 | 0.2919 | 0.5558 | 0.2588 | 0.1466 | 0.2271 | 0.5099 | 0.285 | 0.4706 | 0.5293 | 0.3241 | 0.4886 | 0.729 | 0.5238 | 0.6986 | 0.2541 | 0.6101 | 0.1794 | 0.4281 | 0.1888 | 0.4508 | 0.3136 | 0.4587 | | 36.0648 | 5.0 | 535 | 11.8591 | 0.3218 | 0.5856 | 0.2997 | 0.1436 | 0.2566 | 0.5379 | 0.3028 | 0.4821 | 0.5359 | 0.3055 | 0.5098 | 0.7267 | 0.5498 | 0.7104 | 0.2962 | 0.6139 | 0.1811 | 0.4147 | 0.2364 | 0.46 | 0.3456 | 0.4804 | | 36.0648 | 6.0 | 642 | 11.7190 | 0.3157 | 0.5685 | 0.2997 | 0.1426 | 
0.2557 | 0.5212 | 0.2823 | 0.4795 | 0.5398 | 0.3338 | 0.4924 | 0.7223 | 0.5437 | 0.7099 | 0.2111 | 0.5886 | 0.2288 | 0.4482 | 0.255 | 0.4677 | 0.34 | 0.4844 | | 36.0648 | 7.0 | 749 | 11.9062 | 0.3212 | 0.5979 | 0.2974 | 0.1367 | 0.259 | 0.5387 | 0.2942 | 0.4765 | 0.5428 | 0.3495 | 0.5085 | 0.7153 | 0.5386 | 0.6905 | 0.303 | 0.6139 | 0.2067 | 0.4522 | 0.209 | 0.4754 | 0.3487 | 0.4818 | | 36.0648 | 8.0 | 856 | 11.6933 | 0.3183 | 0.5969 | 0.2901 | 0.1414 | 0.2541 | 0.54 | 0.2977 | 0.483 | 0.5404 | 0.3387 | 0.4954 | 0.7399 | 0.5594 | 0.705 | 0.2812 | 0.6 | 0.2174 | 0.4429 | 0.208 | 0.4785 | 0.3253 | 0.4756 | | 36.0648 | 9.0 | 963 | 11.6233 | 0.3202 | 0.5826 | 0.3226 | 0.1359 | 0.2583 | 0.5543 | 0.3035 | 0.4884 | 0.5462 | 0.3383 | 0.5126 | 0.7321 | 0.5472 | 0.6937 | 0.2746 | 0.619 | 0.2086 | 0.4545 | 0.2237 | 0.4754 | 0.347 | 0.4884 | | 15.3215 | 10.0 | 1070 | 11.4090 | 0.3421 | 0.6207 | 0.3279 | 0.1389 | 0.2796 | 0.5757 | 0.316 | 0.4935 | 0.5477 | 0.3142 | 0.5067 | 0.745 | 0.5714 | 0.6991 | 0.3224 | 0.6278 | 0.2331 | 0.4371 | 0.2325 | 0.4754 | 0.3511 | 0.4991 | | 15.3215 | 11.0 | 1177 | 11.5408 | 0.3436 | 0.6394 | 0.3212 | 0.1544 | 0.2759 | 0.5751 | 0.3175 | 0.5003 | 0.5554 | 0.3501 | 0.5119 | 0.7464 | 0.5444 | 0.7059 | 0.329 | 0.5949 | 0.2259 | 0.4571 | 0.2647 | 0.5108 | 0.354 | 0.5084 | | 15.3215 | 12.0 | 1284 | 11.7707 | 0.3296 | 0.6169 | 0.3078 | 0.1525 | 0.2669 | 0.5336 | 0.3037 | 0.4793 | 0.5399 | 0.3164 | 0.4945 | 0.7323 | 0.5477 | 0.6892 | 0.32 | 0.6101 | 0.2295 | 0.429 | 0.21 | 0.4862 | 0.3408 | 0.4849 | | 15.3215 | 13.0 | 1391 | 11.6683 | 0.3415 | 0.6231 | 0.3243 | 0.1447 | 0.2795 | 0.5746 | 0.3181 | 0.4946 | 0.5536 | 0.3625 | 0.5028 | 0.7387 | 0.5571 | 0.6901 | 0.3245 | 0.6114 | 0.2424 | 0.4585 | 0.2356 | 0.5092 | 0.3479 | 0.4987 | | 15.3215 | 14.0 | 1498 | 11.7344 | 0.3305 | 0.6156 | 0.3133 | 0.1478 | 0.2877 | 0.5388 | 0.3194 | 0.4876 | 0.5468 | 0.3387 | 0.5232 | 0.7116 | 0.5236 | 0.7113 | 0.327 | 0.6 | 0.2255 | 0.4563 | 0.2373 | 0.4738 | 0.3391 | 0.4924 | | 13.4858 | 15.0 | 1605 | 11.6264 | 0.3307 | 0.6056 | 0.3161 | 0.1213 | 0.2799 | 0.5711 | 0.3242 | 0.4933 | 0.5506 | 0.3454 | 0.5066 | 0.7434 | 0.5581 | 0.7144 | 0.3034 | 0.5962 | 0.2163 | 0.4701 | 0.2598 | 0.4862 | 0.3159 | 0.4862 | | 13.4858 | 16.0 | 1712 | 11.5521 | 0.3287 | 0.6044 | 0.3125 | 0.1686 | 0.2751 | 0.5519 | 0.3171 | 0.4922 | 0.5484 | 0.3757 | 0.4978 | 0.7246 | 0.5635 | 0.7162 | 0.3018 | 0.5861 | 0.234 | 0.4714 | 0.236 | 0.48 | 0.3084 | 0.4884 | | 13.4858 | 17.0 | 1819 | 11.7578 | 0.3382 | 0.6292 | 0.3237 | 0.164 | 0.281 | 0.5516 | 0.3215 | 0.4924 | 0.548 | 0.3353 | 0.5037 | 0.7302 | 0.5505 | 0.709 | 0.3225 | 0.6076 | 0.2187 | 0.4402 | 0.2704 | 0.5031 | 0.3286 | 0.48 | | 13.4858 | 18.0 | 1926 | 11.5963 | 0.3454 | 0.6381 | 0.3218 | 0.1607 | 0.2921 | 0.5647 | 0.3294 | 0.4951 | 0.5507 | 0.3516 | 0.489 | 0.7565 | 0.5498 | 0.705 | 0.348 | 0.6051 | 0.2177 | 0.45 | 0.2613 | 0.5138 | 0.3504 | 0.4796 | | 12.4151 | 19.0 | 2033 | 11.5293 | 0.3469 | 0.6347 | 0.3237 | 0.1415 | 0.2967 | 0.5694 | 0.323 | 0.4923 | 0.5415 | 0.3126 | 0.4894 | 0.733 | 0.5663 | 0.7032 | 0.345 | 0.5975 | 0.2389 | 0.4469 | 0.2667 | 0.4846 | 0.3175 | 0.4756 | | 12.4151 | 20.0 | 2140 | 11.5551 | 0.3414 | 0.6306 | 0.3143 | 0.1716 | 0.2916 | 0.5768 | 0.3312 | 0.4969 | 0.5521 | 0.3257 | 0.513 | 0.7292 | 0.5629 | 0.7243 | 0.3179 | 0.5633 | 0.2498 | 0.4696 | 0.267 | 0.52 | 0.3095 | 0.4831 | | 12.4151 | 21.0 | 2247 | 11.9833 | 0.3286 | 0.6184 | 0.2991 | 0.1597 | 0.277 | 0.533 | 0.3224 | 0.4898 | 0.5452 | 0.3502 | 0.5003 | 0.7228 | 0.5478 | 0.6955 | 0.2979 | 
0.5899 | 0.2414 | 0.4638 | 0.2361 | 0.4923 | 0.3197 | 0.4844 | | 12.4151 | 22.0 | 2354 | 11.9215 | 0.3408 | 0.6259 | 0.3184 | 0.142 | 0.2864 | 0.5548 | 0.3264 | 0.4893 | 0.5399 | 0.3216 | 0.4872 | 0.744 | 0.5429 | 0.6923 | 0.3578 | 0.619 | 0.2483 | 0.4585 | 0.2269 | 0.4569 | 0.3282 | 0.4729 | | 12.4151 | 23.0 | 2461 | 12.0853 | 0.3304 | 0.6162 | 0.3031 | 0.1564 | 0.2852 | 0.5542 | 0.3198 | 0.4856 | 0.5275 | 0.309 | 0.4927 | 0.7118 | 0.5404 | 0.7041 | 0.3271 | 0.5886 | 0.242 | 0.4237 | 0.2325 | 0.4492 | 0.3097 | 0.472 | | 11.6364 | 24.0 | 2568 | 11.8409 | 0.3344 | 0.622 | 0.3186 | 0.1689 | 0.2871 | 0.5485 | 0.3217 | 0.4938 | 0.5446 | 0.3457 | 0.4936 | 0.7208 | 0.5455 | 0.7126 | 0.2952 | 0.5899 | 0.2615 | 0.4638 | 0.2534 | 0.4862 | 0.3164 | 0.4707 | | 11.6364 | 25.0 | 2675 | 12.1816 | 0.3201 | 0.5981 | 0.3021 | 0.1342 | 0.2717 | 0.5455 | 0.3151 | 0.4803 | 0.5303 | 0.3205 | 0.4817 | 0.7252 | 0.5315 | 0.7023 | 0.2957 | 0.5911 | 0.2163 | 0.4415 | 0.2379 | 0.4462 | 0.3188 | 0.4707 | | 11.6364 | 26.0 | 2782 | 11.9448 | 0.3291 | 0.6113 | 0.2964 | 0.1687 | 0.2751 | 0.5635 | 0.3163 | 0.4875 | 0.5385 | 0.3434 | 0.4928 | 0.7221 | 0.5433 | 0.6919 | 0.2817 | 0.5797 | 0.2582 | 0.4746 | 0.2367 | 0.4708 | 0.3254 | 0.4756 | | 11.6364 | 27.0 | 2889 | 11.9042 | 0.322 | 0.6094 | 0.2899 | 0.1286 | 0.2739 | 0.5564 | 0.3211 | 0.4919 | 0.5404 | 0.3371 | 0.4771 | 0.7404 | 0.5306 | 0.7005 | 0.3051 | 0.5975 | 0.2411 | 0.4509 | 0.228 | 0.4769 | 0.3052 | 0.4764 | | 11.6364 | 28.0 | 2996 | 12.1391 | 0.3242 | 0.6003 | 0.3057 | 0.1342 | 0.2714 | 0.5507 | 0.3144 | 0.483 | 0.5308 | 0.3222 | 0.4825 | 0.7136 | 0.5356 | 0.6986 | 0.298 | 0.5848 | 0.2454 | 0.4487 | 0.2544 | 0.4538 | 0.2875 | 0.468 | | 11.0017 | 29.0 | 3103 | 12.0627 | 0.3371 | 0.6166 | 0.3215 | 0.1445 | 0.2846 | 0.5479 | 0.3212 | 0.4874 | 0.5441 | 0.3267 | 0.5051 | 0.7284 | 0.5411 | 0.7077 | 0.3292 | 0.6038 | 0.2528 | 0.4451 | 0.2572 | 0.4877 | 0.3052 | 0.4764 | | 11.0017 | 30.0 | 3210 | 12.3028 | 0.3353 | 0.6079 | 0.3192 | 0.158 | 0.2837 | 0.5625 | 0.315 | 0.4875 | 0.5332 | 0.2965 | 0.4917 | 0.7294 | 0.5347 | 0.6905 | 0.3534 | 0.6 | 0.246 | 0.4464 | 0.242 | 0.4662 | 0.3001 | 0.4631 | | 11.0017 | 31.0 | 3317 | 11.9750 | 0.339 | 0.6148 | 0.325 | 0.1401 | 0.2827 | 0.5603 | 0.3195 | 0.4821 | 0.5328 | 0.2975 | 0.4866 | 0.7262 | 0.5451 | 0.7063 | 0.3469 | 0.5848 | 0.2502 | 0.4522 | 0.2412 | 0.4585 | 0.3115 | 0.4622 | | 11.0017 | 32.0 | 3424 | 12.0644 | 0.3361 | 0.6158 | 0.3151 | 0.1374 | 0.2836 | 0.5539 | 0.3197 | 0.4886 | 0.5368 | 0.2821 | 0.5061 | 0.7212 | 0.5472 | 0.6982 | 0.3281 | 0.5886 | 0.2436 | 0.4612 | 0.2432 | 0.4615 | 0.3186 | 0.4747 | | 10.4746 | 33.0 | 3531 | 11.9360 | 0.3323 | 0.615 | 0.3027 | 0.1597 | 0.2821 | 0.5495 | 0.3162 | 0.4863 | 0.5366 | 0.294 | 0.4971 | 0.7228 | 0.531 | 0.7032 | 0.3396 | 0.5949 | 0.2532 | 0.4705 | 0.2255 | 0.4477 | 0.3121 | 0.4667 | | 10.4746 | 34.0 | 3638 | 11.7375 | 0.3393 | 0.6215 | 0.3145 | 0.1483 | 0.2853 | 0.5579 | 0.326 | 0.4915 | 0.5427 | 0.3199 | 0.5005 | 0.7224 | 0.5444 | 0.6968 | 0.3343 | 0.6076 | 0.2515 | 0.4638 | 0.2479 | 0.4615 | 0.3183 | 0.4836 | | 10.4746 | 35.0 | 3745 | 11.9828 | 0.3282 | 0.605 | 0.3017 | 0.1392 | 0.2717 | 0.5518 | 0.3145 | 0.4801 | 0.5305 | 0.2918 | 0.4795 | 0.7182 | 0.5313 | 0.6973 | 0.3315 | 0.5835 | 0.2456 | 0.4616 | 0.2217 | 0.4446 | 0.3108 | 0.4653 | | 10.4746 | 36.0 | 3852 | 11.8752 | 0.3302 | 0.6169 | 0.31 | 0.1597 | 0.2683 | 0.5583 | 0.3127 | 0.4814 | 0.5278 | 0.297 | 0.4793 | 0.709 | 0.5367 | 0.6968 | 0.3315 | 0.5848 | 0.2429 | 0.4402 | 0.2198 | 0.4477 | 0.32 | 0.4693 | | 10.4746 | 
37.0 | 3959 | 11.9312 | 0.3304 | 0.6097 | 0.3073 | 0.1444 | 0.2765 | 0.5464 | 0.3185 | 0.4809 | 0.536 | 0.3159 | 0.4909 | 0.7086 | 0.5382 | 0.6923 | 0.3265 | 0.6013 | 0.2545 | 0.454 | 0.213 | 0.4538 | 0.3198 | 0.4787 | | 9.9527 | 38.0 | 4066 | 11.9053 | 0.3355 | 0.6135 | 0.3116 | 0.1443 | 0.283 | 0.5541 | 0.3188 | 0.4856 | 0.5333 | 0.3009 | 0.4804 | 0.7224 | 0.5382 | 0.6919 | 0.3493 | 0.5962 | 0.2481 | 0.4527 | 0.2271 | 0.4523 | 0.3151 | 0.4733 | | 9.9527 | 39.0 | 4173 | 11.9321 | 0.331 | 0.6118 | 0.3094 | 0.1417 | 0.2754 | 0.5537 | 0.313 | 0.4817 | 0.5325 | 0.3038 | 0.4847 | 0.7139 | 0.538 | 0.6995 | 0.3312 | 0.5949 | 0.2491 | 0.4549 | 0.2289 | 0.4415 | 0.3079 | 0.4716 | | 9.9527 | 40.0 | 4280 | 11.8940 | 0.3317 | 0.6135 | 0.3061 | 0.1432 | 0.276 | 0.5536 | 0.3136 | 0.4796 | 0.5283 | 0.2983 | 0.477 | 0.715 | 0.5378 | 0.7009 | 0.3298 | 0.5823 | 0.2506 | 0.4496 | 0.2308 | 0.4446 | 0.3095 | 0.464 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
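## Usage example (sketch)

The card leaves its usage sections empty, so the following is a minimal inference sketch rather than an official snippet. The checkpoint id below is a placeholder inferred from the model name (the card does not state the actual repo id); everything else uses the standard `transformers` object-detection API:

```python
# Hedged sketch: the checkpoint id is a placeholder, not confirmed by the card.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "rtdetr-v2-r50-cppe5-finetune-2"  # placeholder repo id; substitute the real one
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) triples above a confidence threshold.
# image.size is (width, height); target_sizes expects (height, width), hence the reversal.
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```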
dhschaves/pedrogarden
dhschaves
2025-05-30T05:47:59Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-30T05:15:50Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: PEDGAR
---

# Pedrogarden

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `PEDGAR` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "PEDGAR",
    "lora_weights": "https://huggingface.co/dhschaves/pedrogarden/resolve/main/lora.safetensors"
}

# Run the shared FLUX dev LoRA endpoint, pointing it at this LoRA's weights.
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# The endpoint returns one file object per generated image.
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the base FLUX.1-dev weights, then attach this LoRA on top.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('dhschaves/pedrogarden', weight_name='lora.safetensors')
image = pipeline('PEDGAR').images[0]
image.save("my_image.png")  # `image` is a PIL image, so it can be saved directly
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/dhschaves/pedrogarden/discussions) to add images that show off what you’ve made with this LoRA.
voidful/gemma-3-omni-27b-it
voidful
2025-05-30T05:43:57Z
2,507
0
transformers
[ "transformers", "safetensors", "gemma3", "feature-extraction", "any-to-any", "custom_code", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
any-to-any
2025-05-23T10:11:22Z
---
library_name: transformers
pipeline_tag: any-to-any
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
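## Getting-started sketch (unofficial)

The "How to Get Started" section above is empty, so here is a heavily hedged loading sketch. The repo is tagged `custom_code`, so `trust_remote_code=True` is assumed to be required; the processor class, call signature, and output structure all depend on the repo's custom code and are assumptions, not documented behavior:

```python
# Unofficial sketch: class choices below are guesses based on the repo tags
# (transformers, gemma3, any-to-any, custom_code), not on documented usage.
from transformers import AutoModel, AutoProcessor

model_id = "voidful/gemma-3-omni-27b-it"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",  # assumes `accelerate` is installed; a 27B checkpoint will not fit on one small GPU
)

# The processor's call signature is an assumption; an any-to-any model may also accept audio/images.
inputs = processor(text="Hello!", return_tensors="pt").to(model.device)
outputs = model(**inputs)  # the output type is defined by the repo's custom code
```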