Dataset columns (name, dtype, and observed minimum/maximum from the dataset viewer):

| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-02 18:27:42 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (549 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-02 18:24:50 |
| card | string (length) | 11 | 1.01M |
luckycanucky/me-x6
luckycanucky
2025-06-20T02:52:32Z
0
0
transformers
[ "transformers", "pytorch", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T02:20:06Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** luckycanucky - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
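The card above gives no usage snippet, so here is a minimal hedged quick-start sketch: it assumes the repo's PyTorch weights load with the standard transformers text-generation pipeline (the repo also ships GGUF files for llama.cpp, which this sketch does not cover), and the prompt is illustrative.

```python
# Hedged sketch: assumes luckycanucky/me-x6's PyTorch weights work with the
# standard transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="luckycanucky/me-x6")
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```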
yshr-926/bert-base-japanese-v3-wrime-sentiment
yshr-926
2025-06-20T02:51:36Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-20T02:51:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
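The card's "How to Get Started with the Model" section is empty, so here is a hedged sketch of the usual transformers quick start for a BERT sentiment classifier; that this checkpoint works with the text-classification pipeline, its output labels, and the need for the Japanese tokenizer extras (fugashi, unidic-lite) are all assumptions.

```python
# Hedged sketch for the card's empty quick-start section; assumes the
# checkpoint works with the standard text-classification pipeline.
# Japanese BERT tokenizers typically also require: pip install fugashi unidic-lite
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yshr-926/bert-base-japanese-v3-wrime-sentiment",
)
# Example input: "Today was a very fun day." (Japanese)
print(classifier("今日はとても楽しい一日だった。"))
```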
lora456/lindaaaa
lora456
2025-06-20T02:49:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-20T02:48:42Z
--- license: creativeml-openrail-m ---
RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf
RichardErkhov
2025-06-20T02:48:35Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-20T01:39:37Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) test_Skywork-o1-Open-Llama_blob_RPmaxguidance - GGUF - Model creator: https://huggingface.co/mergekit-community/ - Original model: https://huggingface.co/mergekit-community/test_Skywork-o1-Open-Llama_blob_RPmaxguidance/ | Name | Quant method | Size | | ---- | ---- | ---- | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q2_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q2_K.gguf) | Q2_K | 2.96GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_S.gguf) | IQ3_S | 3.43GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_M.gguf) | IQ3_M | 3.52GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K.gguf) | Q3_K | 3.74GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_0.gguf) | Q4_0 | 4.34GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_S.gguf) | Q4_K_S | 
4.37GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K.gguf) | Q4_K | 4.58GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_1.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_1.gguf) | Q4_1 | 4.78GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_0.gguf) | Q5_0 | 5.21GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K.gguf) | Q5_K | 5.34GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_1.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_1.gguf) | Q5_1 | 5.65GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q6_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q6_K.gguf) | Q6_K | 6.14GB | | [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q8_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- base_model: - Skywork/Skywork-o1-Open-Llama-3.1-8B - ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0 - ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1 - ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3 - Solshine/reflection-llama-3.1-8B - ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2 - Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the della_linear merge method using [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) as a base. 
### Models Merged The following models were included in the merge: * [Skywork/Skywork-o1-Open-Llama-3.1-8B](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B) * [ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0) * [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) * [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3) * [Solshine/reflection-llama-3.1-8B](https://huggingface.co/Solshine/reflection-llama-3.1-8B) * [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3 parameters: density: 0.8 weight: 0.6 - model: Solshine/reflection-llama-3.1-8B parameters: density: 0.5 weight: 0.6 - model: Skywork/Skywork-o1-Open-Llama-3.1-8B parameters: density: 0.5 weight: 0.6 - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2 parameters: density: 0.8 weight: 0.6 - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1 parameters: density: 0.8 weight: 0.6 - model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0 parameters: density: 0.3 weight: 0.3 merge_method: della_linear base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 parameters: normalize: false int8_mask: true dtype: float16 ```
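The card lists the quantized files but no invocation. As a hedged sketch, the llama.cpp pattern used by other GGUF cards in this dump would look like the following; the Q4_K_M file name is taken from the quant table above.

```bash
# Hedged sketch, mirroring the llama-cli invocation other GGUF cards use;
# the file name comes from the quant table above.
llama-cli \
  --hf-repo RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf \
  --hf-file test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_M.gguf \
  -p "The meaning to life and the universe is"
```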
tranthanhnguyenai1/CoderAG_Q17B_Q7B
tranthanhnguyenai1
2025-06-20T02:46:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T17:30:02Z
--- base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** tranthanhnguyenai1 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-1.7b-unsloth-bnb-4bit This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
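A hedged quick-start sketch for this checkpoint: it assumes the safetensors weights and tokenizer load with the transformers Auto classes and that the tokenizer ships a chat template; the example prompt is illustrative.

```python
# Hedged sketch: assumes the repo's safetensors weights and tokenizer load
# with the transformers Auto classes and include a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tranthanhnguyenai1/CoderAG_Q17B_Q7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```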
Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF
Triangle104
2025-06-20T02:40:33Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-Qwen3-8B-abliterated-v2", "base_model:quantized:huihui-ai/Huihui-Qwen3-8B-abliterated-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T02:40:11Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation base_model: huihui-ai/Huihui-Qwen3-8B-abliterated-v2 tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo --- # Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-8B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_m.gguf -c 2048 ```
NovaSkar/sparktts-ml
NovaSkar
2025-06-20T02:39:10Z
0
1
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-06-19T09:10:53Z
--- license: apache-2.0 language: - en - id - ms - th - es - tl pipeline_tag: text-to-speech --- A text-to-speech model based on Spark-TTS that supports English, Indonesian, Malay, Thai, Spanish, and Tagalog. For inference, you can use the code from https://github.com/SparkAudio/Spark-TTS and simply replace the LLM model folder with this project. Inference with a text prompt may produce some empty audio; inference without a text prompt avoids this issue, but may come at the cost of reduced performance.
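As a hedged sketch of the swap the card describes, the checkpoint can be fetched with huggingface_hub and dropped over the LLM folder of a Spark-TTS clone; the exact target path inside Spark-TTS is an assumption.

```python
# Hedged sketch: download this repo so it can replace the LLM model folder in
# a Spark-TTS clone, as the card instructs. The target path is hypothetical;
# check the Spark-TTS repo for the actual LLM folder location.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="NovaSkar/sparktts-ml",
    local_dir="Spark-TTS/pretrained_models/LLM",  # hypothetical target path
)
```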
Montecarlo2024/Llama3.1-8B-Nemotron-Python-v2_Q5_k_m
Montecarlo2024
2025-06-20T02:38:58Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T02:38:11Z
--- base_model: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Montecarlo2024 - **License:** apache-2.0 - **Finetuned from model:** nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
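The card does not name the GGUF file inside the repo, so as a hedged sketch one can enumerate the repo's files with huggingface_hub and download the quant before pointing llama.cpp at it.

```python
# Hedged sketch: the card omits the .gguf file name, so list the repo's files
# and download the first GGUF found; pass the returned path to llama.cpp.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "Montecarlo2024/Llama3.1-8B-Nemotron-Python-v2_Q5_k_m"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(local_path)
```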
vuitton/21v1scrip_38
vuitton
2025-06-20T02:38:50Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-18T17:03:06Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
fuadsm/ckpt
fuadsm
2025-06-20T02:37:48Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-16T13:09:15Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ ---
hardlyworking/Final4BRC3-Q4_0-GGUF
hardlyworking
2025-06-20T02:37:22Z
0
0
transformers
[ "transformers", "gguf", "axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "dataset:ResplendentAI/Luna_NSFW_Text", "dataset:ResplendentAI/Sissification_Hypno_1k", "dataset:ResplendentAI/Synthetic_Soul_1k", "base_model:hardlyworking/Final4BRC3", "base_model:quantized:hardlyworking/Final4BRC3", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T02:37:09Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: hardlyworking/Final4BRC3 tags: - axolotl - generated_from_trainer - llama-cpp - gguf-my-repo datasets: - ResplendentAI/Luna_NSFW_Text - ResplendentAI/Sissification_Hypno_1k - ResplendentAI/Synthetic_Soul_1k model-index: - name: Final4BRC results: [] --- # hardlyworking/Final4BRC3-Q4_0-GGUF This model was converted to GGUF format from [`hardlyworking/Final4BRC3`](https://huggingface.co/hardlyworking/Final4BRC3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/hardlyworking/Final4BRC3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hardlyworking/Final4BRC3-Q4_0-GGUF --hf-file final4brc3-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hardlyworking/Final4BRC3-Q4_0-GGUF --hf-file final4brc3-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hardlyworking/Final4BRC3-Q4_0-GGUF --hf-file final4brc3-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hardlyworking/Final4BRC3-Q4_0-GGUF --hf-file final4brc3-q4_0.gguf -c 2048 ```
hsuwill000/LCM-kotosmix_diffusers-openvino
hsuwill000
2025-06-20T02:32:19Z
0
0
null
[ "openvino", "openvino-export", "base_model:spamsoms/LCM-kotosmix_diffusers", "base_model:finetune:spamsoms/LCM-kotosmix_diffusers", "region:us" ]
null
2024-12-14T19:48:41Z
--- base_model: spamsoms/LCM-kotosmix_diffusers tags: - openvino - openvino-export --- This model was converted to OpenVINO from [`spamsoms/LCM-kotosmix_diffusers`](https://huggingface.co/spamsoms/LCM-kotosmix_diffusers) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space. First make sure you have optimum-intel installed: ```bash pip install optimum[openvino] ``` To load your model you can do as follows: ```python import huggingface_hub as hf_hub from optimum.intel import OVStableDiffusionPipeline from diffusers import LCMScheduler import torch model_id = "hsuwill000/LCM-kotosmix_diffusers-openvino" HIGH = 1024 WIDTH = 1024 batch_size = -1 # Or set it to a specific positive integer if needed prompt="agirl, anime," negative_prompt="(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy,\ extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, \ mutation, mutated, ugly, disgusting, blurry, amputation" pipe = OVStableDiffusionPipeline.from_pretrained( model_id, compile=False, ov_config={"CACHE_DIR": ""}, torch_dtype=torch.bfloat16, # More standard dtype for speed safety_checker=None, use_safetensors=False, ) pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) print(pipe.scheduler.compatibles) pipe.reshape(batch_size=batch_size, height=HIGH, width=WIDTH, num_images_per_prompt=1) pipe.compile() image = pipe( prompt=prompt, negative_prompt=negative_prompt, width=WIDTH, height=HIGH, guidance_scale=2, num_inference_steps=4, num_images_per_prompt=1, ).images[0] image.save("test.png") ```
Montecarlo2024/Llama3.1-8B-Nemotron-Python-v2_Q8_0
Montecarlo2024
2025-06-20T02:29:39Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T02:28:30Z
--- base_model: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Montecarlo2024 - **License:** apache-2.0 - **Finetuned from model:** nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Kimanjea/prompt-technique
Kimanjea
2025-06-20T02:25:22Z
0
0
mlx
[ "mlx", "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "license:llama3.2", "region:us" ]
text-generation
2025-06-20T01:15:08Z
--- language: - en - de - fr - it - pt - hi - es - th library_name: mlx pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - mlx license: llama3.2 extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit base_model: meta-llama/llama-3.2-1B-Instruct ---
JayHyeon/Qwen_1.5B-math-VDPO_1e-4_1.0vpo_constant-10ep
JayHyeon
2025-06-20T02:21:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:argilla/distilabel-math-preference-dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-Math-1.5B", "base_model:finetune:Qwen/Qwen2.5-Math-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T01:37:57Z
--- base_model: Qwen/Qwen2.5-Math-1.5B datasets: argilla/distilabel-math-preference-dpo library_name: transformers model_name: Qwen_1.5B-math-VDPO_1e-4_1.0vpo_constant-10ep tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Qwen_1.5B-math-VDPO_1e-4_1.0vpo_constant-10ep This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JayHyeon/Qwen_1.5B-math-VDPO_1e-4_1.0vpo_constant-10ep", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/j6m5d8fc) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.19.0.dev0 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
TencentARC/Moto
TencentARC
2025-06-20T02:16:57Z
0
6
null
[ "arxiv:2412.04445", "license:apache-2.0", "region:us" ]
null
2024-12-03T13:00:08Z
--- license: apache-2.0 --- # Moto: Latent Motion Token as the Bridging Language for Robot Manipulation <a href='https://github.com/TencentARC/Moto'><img src='https://img.shields.io/badge/Github-black'></a> Paper: https://huggingface.co/papers/2412.04445 ## 🚀Introduction >Recent developments in Large Language Models (LLMs) pre-trained on extensive corpora have shown significant success in various natural language processing (NLP) tasks with minimal fine-tuning. >This success offers new promise for robotics, which has long been constrained by the high cost of action-labeled data. We ask: given the abundant video data containing interaction-related knowledge available as a rich "corpus", <b><i>can a similar generative pre-training approach be effectively applied to enhance robot learning?</i></b> The key challenge is to identify an effective representation for autoregressive pre-training that benefits robot manipulation tasks. >Inspired by the way humans learn new skills through observing dynamic environments, we propose that effective robotic learning should emphasize motion-related knowledge, which is closely tied to low-level actions and is hardware-agnostic, facilitating the transfer of learned motions to actual robot actions. > >To this end, we introduce <b>Moto</b>, which converts video content into latent <b>Mo</b>tion <b>To</b>ken sequences by a Latent Motion Tokenizer, learning a bridging "language" of motion from videos in an unsupervised manner. >We pre-train Moto-GPT through motion token autoregression, enabling it to capture diverse visual motion knowledge. After pre-training, Moto-GPT demonstrates the promising ability to produce semantically interpretable motion tokens, predict plausible motion trajectories, and assess trajectory rationality through output likelihood. >To transfer learned motion priors to real robot actions, we implement a co-fine-tuning strategy that seamlessly bridges latent motion token prediction and real robot control. Extensive experiments show that the fine-tuned Moto-GPT exhibits superior robustness and efficiency on robot manipulation benchmarks, underscoring its effectiveness in transferring knowledge from video data to downstream visual manipulations. ## ⚙️Quick Start ### Installation Clone the repo: ```bash git clone https://github.com/TencentARC/Moto.git ``` Install minimal requirements for Moto training and inference: ```bash conda create -n moto python=3.8 conda activate moto cd Moto pip install -r requirements.txt cd .. ``` [Optional] Set up the conda environment for evaluating Moto-GPT on the [CALVIN](https://github.com/mees/calvin) benchmark: ```bash conda create -n moto_for_calvin python=3.8 conda activate moto_for_calvin git clone --recurse-submodules https://github.com/mees/calvin.git pip install setuptools==57.5.0 cd calvin cd calvin_env; git checkout main cd ../calvin_models sed -i 's/pytorch-lightning==1.8.6/pytorch-lightning/g' requirements.txt sed -i 's/torch==1.13.1/torch/g' requirements.txt cd .. sh ./install.sh cd .. sudo apt-get install -y libegl1-mesa libegl1 sudo apt-get install -y libgl1 sudo apt-get install -y libosmesa6-dev sudo apt-get install -y patchelf cd Moto pip install -r requirements.txt cd .. 
``` [Optional] Set up the conda environment for evaluating Moto-GPT on the [SIMPLER](https://github.com/simpler-env/SimplerEnv) benchmark: ```bash source /data/miniconda3/bin/activate conda create -n moto_for_simpler python=3.10 -y conda activate moto_for_simpler git clone https://github.com/simpler-env/SimplerEnv --recurse-submodules pip install numpy==1.24.4 cd SimplerEnv/ManiSkill2_real2sim pip install -e . cd SimplerEnv pip install -e . sudo apt install ffmpeg pip install setuptools==58.2.0 pip install tensorflow==2.15.0 pip install -r requirements_full_install.txt pip install tensorflow[and-cuda]==2.15.1 pip install git+https://github.com/nathanrooy/simulated-annealing cd .. cd Moto pip install -r requirements.txt cd .. ``` ### Model Weights We release the Latent Motion Tokenizer, the pre-trained Moto-GPT, and the fine-tuned Moto-GPT on [Moto Hugging Face](https://huggingface.co/TencentARC/Moto). You can download them separately and save them in the corresponding directories (`latent_motion_tokenizer/checkpoints/` and `moto_gpt/checkpoints/`). ## 💻Inference ### Latent trajectory inference with the pre-trained Moto-GPT and the Latent Motion Tokenizer ```bash conda activate moto export PROJECT_ROOT=[your path to Moto project] cd ${PROJECT_ROOT}/scripts nohup bash run_latent_motion_generation.sh > run_latent_motion_generation.log 2>&1 & tail -f run_latent_motion_generation.log ``` ### Evaluating the fine-tuned Moto-GPT on robot manipulation benchmarks Evaluation on CALVIN ```bash conda activate moto_for_calvin export PROJECT_ROOT=[your path to Moto project] cd ${PROJECT_ROOT}/scripts nohup bash evaluate_moto_gpt_in_calvin.sh > evaluate_moto_gpt_in_calvin.log 2>&1 & tail -f evaluate_moto_gpt_in_calvin.log ``` Evaluation on SIMPLER ```bash conda activate moto_for_simpler export PROJECT_ROOT=[your path to Moto project] cd ${PROJECT_ROOT}/scripts nohup bash evaluate_moto_gpt_in_simpler.sh > evaluate_moto_gpt_in_simpler.log 2>&1 & tail -f evaluate_moto_gpt_in_simpler.log ``` ## 📝To Do - [x] Release the Latent Motion Tokenizer - [x] Release the pre-trained and fine-tuned Moto-GPT - [x] Release the inference code - [ ] Release the training code ## 🙌Acknowledgement This repo benefits from [Taming Transformers](https://github.com/CompVis/taming-transformers/), [Phenaki-Pytorch](https://github.com/lucidrains/phenaki-pytorch), [GR-1](https://github.com/bytedance/GR-1), [GR1-Training](https://github.com/EDiRobotics/GR1-Training). Thanks for their wonderful work!
aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2
aditeyabaral-redis
2025-06-20T02:14:56Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "cross-encoder", "quora", "text-classification", "sentence-pair-classification", "semantic-similarity", "semantic-search", "retrieval", "reranking", "generated_from_trainer", "dataset_size:363861", "loss:BinaryCrossEntropyLoss", "text-ranking", "en", "arxiv:1908.10084", "base_model:cross-encoder/ms-marco-MiniLM-L6-v2", "base_model:finetune:cross-encoder/ms-marco-MiniLM-L6-v2", "license:apache-2.0", "model-index", "region:us" ]
text-ranking
2025-06-19T22:26:57Z
--- language: - en license: apache-2.0 tags: - cross-encoder - sentence-transformers - quora - text-classification - sentence-pair-classification - semantic-similarity - semantic-search - retrieval - reranking - generated_from_trainer - dataset_size:363861 - loss:BinaryCrossEntropyLoss base_model: cross-encoder/ms-marco-MiniLM-L6-v2 pipeline_tag: text-ranking library_name: sentence-transformers metrics: - accuracy - accuracy_threshold - f1 - f1_threshold - precision - recall - average_precision model-index: - name: Redis semantic caching CrossEncoder model fine-tuned on Quora Question Pairs results: - task: type: cross-encoder-classification name: Cross Encoder Classification dataset: name: quora eval type: quora-eval metrics: - type: accuracy value: 0.6956145341215464 name: Accuracy - type: accuracy_threshold value: 4.168765068054199 name: Accuracy Threshold - type: f1 value: 0.5947228598694901 name: F1 - type: f1_threshold value: 3.341184139251709 name: F1 Threshold - type: precision value: 0.4833759590792839 name: Precision - type: recall value: 0.7727211796246649 name: Recall - type: average_precision value: 0.6228630274737263 name: Average Precision --- # Redis semantic caching CrossEncoder model fine-tuned on Quora Question Pairs This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) on the Quora Question Pairs LangCache Train Set dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for sentence pair classification. ## Model Details ### Model Description - **Model Type:** Cross Encoder - **Base model:** [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) <!-- at revision ce0834f22110de6d9222af7a7a03628121708969 --> - **Maximum Sequence Length:** 512 tokens - **Number of Output Labels:** 1 label - **Training Dataset:** - Quora Question Pairs LangCache Train Set - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder) ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import CrossEncoder # Download from the 🤗 Hub model = CrossEncoder("aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2") # Get scores for pairs of texts pairs = [ ['How can I get a list of my Gmail accounts?', 'How can I find all my old Gmail accounts?'], ['How can I stop Quora from modifying and editing other people’s questions on Quora?', 'Can I prevent a Quora user from editing my question on Quora?'], ['How much does it cost to design a logo in india?', 'How much does it cost to design a logo?'], ['What is screenedrenters.com?', 'What is allmyapps.com?'], ['What are the best colleges for an MBA in Australia?', 'What are the top MBA schools in Australia?'], ] scores = model.predict(pairs) print(scores.shape) # (5,) # Or rank different texts based on similarity to a single text ranks = model.rank( 'How can I get a list of my Gmail accounts?', [ 'How can I find all my old Gmail accounts?', 'Can I prevent a Quora user from editing my question on Quora?', 'How much does it cost to design a logo?', 'What is allmyapps.com?', 'What are the top MBA schools in Australia?', ] ) # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Cross Encoder Classification * Dataset: `quora-eval` * Evaluated with [<code>CrossEncoderClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderClassificationEvaluator) | Metric | Value | |:----------------------|:-----------| | accuracy | 0.6956 | | accuracy_threshold | 4.1688 | | f1 | 0.5947 | | f1_threshold | 3.3412 | | precision | 0.4834 | | recall | 0.7727 | | **average_precision** | **0.6229** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Quora Question Pairs LangCache Train Set * Dataset: Quora Question Pairs LangCache Train Set * Size: 363,861 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 15 characters</li><li>mean: 60.22 characters</li><li>max: 229 characters</li></ul> | <ul><li>min: 14 characters</li><li>mean: 60.0 characters</li><li>max: 274 characters</li></ul> | <ul><li>0: ~63.50%</li><li>1: ~36.50%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:-------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|:---------------| | <code>Why do people believe in God and how can they say he/she exists?</code> | <code>Why do we kill each other in the name of God?</code> | <code>0</code> | | <code>What are the chances of a bee sting when a bee buzzes around you?</code> | <code>How can I tell if my bees are agitated/likely to sting?</code> | <code>0</code> | | <code>If a man from Syro Malankara church marries a Syro-Malabar girl, can they join a Syro-Malabar parish?</code> | <code>Is Malabar Hills of Mumbai anyhow related to Malabar of Kerala?</code> | <code>0</code> | * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters: ```json { "activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": null } ``` ### Evaluation Dataset #### Quora Question Pairs LangCache Validation Set * Dataset: Quora Question Pairs LangCache Validation Set * Size: 40,429 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 13 characters</li><li>mean: 59.91 characters</li><li>max: 266 characters</li></ul> | <ul><li>min: 13 characters</li><li>mean: 59.51 characters</li><li>max: 293 characters</li></ul> | <ul><li>0: ~63.80%</li><li>1: ~36.20%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------|:---------------| | <code>How can I get a list of my Gmail accounts?</code> | <code>How can I find all my old Gmail accounts?</code> | <code>1</code> | | <code>How can I stop Quora from modifying and editing other people’s questions on Quora?</code> | <code>Can I prevent a Quora user from editing my question on Quora?</code> | <code>1</code> | | <code>How much does it cost to design a logo in india?</code> | <code>How much does it cost to 
design a logo?</code> | <code>0</code> | * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters: ```json { "activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": null } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 0.0002 - `num_train_epochs`: 15 - `load_best_model_at_end`: True - `push_to_hub`: True - `hub_model_id`: aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 0.0002 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 15 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: True - `resume_from_checkpoint`: None - `hub_model_id`: aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2 - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - 
`auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | quora-eval_average_precision | |:----------:|:--------:|:-------------:|:---------------:|:----------------------------:| | 0.0879 | 500 | 0.3913 | 0.3302 | 0.5603 | | 0.1759 | 1000 | 0.3408 | 0.3220 | 0.5932 | | 0.2638 | 1500 | 0.3318 | 0.3249 | 0.6144 | | 0.3517 | 2000 | 0.3235 | 0.3027 | 0.6280 | | 0.4397 | 2500 | 0.3173 | 0.2944 | 0.6233 | | 0.5276 | 3000 | 0.3049 | 0.3009 | 0.6685 | | 0.6155 | 3500 | 0.3071 | 0.2908 | 0.6221 | | 0.7035 | 4000 | 0.3015 | 0.2854 | 0.6143 | | 0.7914 | 4500 | 0.2944 | 0.2759 | 0.6361 | | 0.8794 | 5000 | 0.2984 | 0.2854 | 0.6616 | | 0.9673 | 5500 | 0.2898 | 0.3002 | 0.6109 | | 1.0552 | 6000 | 0.2552 | 0.2800 | 0.6466 | | 1.1432 | 6500 | 0.2352 | 0.2821 | 0.6305 | | 1.2311 | 7000 | 0.2366 | 0.2778 | 0.5699 | | 1.3190 | 7500 | 0.2332 | 0.2831 | 0.6076 | | 1.4070 | 8000 | 0.2366 | 0.2783 | 0.6003 | | 1.4949 | 8500 | 0.2391 | 0.2716 | 0.6195 | | **1.5828** | **9000** | **0.241** | **0.2685** | **0.6229** | | 1.6708 | 9500 | 0.2359 | 0.2804 | 0.6410 | | 1.7587 | 10000 | 0.2374 | 0.2819 | 0.6448 | | 1.8466 | 10500 | 0.2387 | 0.2750 | 0.6479 | | 1.9346 | 11000 | 0.2343 | 0.2734 | 0.6034 | | 2.0225 | 11500 | 0.2193 | 0.3168 | 0.6384 | | 2.1104 | 12000 | 0.1741 | 0.3011 | 0.6189 | | 2.1984 | 12500 | 0.1732 | 0.2988 | 0.6412 | | 2.2863 | 13000 | 0.1814 | 0.2839 | 0.6156 | | 2.3743 | 13500 | 0.1815 | 0.2930 | 0.5520 | | 2.4622 | 14000 | 0.1774 | 0.3461 | 0.6195 | | 2.5501 | 14500 | 0.1886 | 0.3033 | 0.6113 | | 2.6381 | 15000 | 0.1831 | 0.2925 | 0.5815 | | 2.7260 | 15500 | 0.1889 | 0.2801 | 0.5701 | | 2.8139 | 16000 | 0.1869 | 0.2893 | 0.6090 | | 2.9019 | 16500 | 0.1896 | 0.3038 | 0.6142 | | 2.9898 | 17000 | 0.1967 | 0.2791 | 0.5967 | | 3.0777 | 17500 | 0.1395 | 0.3119 | 0.5672 | | 3.1657 | 18000 | 0.1392 | 0.3052 | 0.5876 | | 3.2536 | 18500 | 0.1411 | 0.3030 | 0.6064 | | 3.3415 | 19000 | 0.1356 | 0.3064 | 0.5535 | | 3.4295 | 19500 | 0.14 | 0.3144 | 0.5978 | | 3.5174 | 20000 | 0.1461 | 0.3332 | 0.5961 | | 3.6053 | 20500 | 0.1468 | 0.3179 | 0.5975 | | 3.6933 | 21000 | 0.1487 | 0.3327 | 0.5932 | | 3.7812 | 21500 | 0.1479 | 0.3340 | 0.5888 | | 3.8692 | 22000 | 0.1458 | 0.3172 | 0.5478 | | 3.9571 | 22500 | 0.1566 | 0.3036 | 0.5926 | | 4.0450 | 23000 | 0.1257 | 0.3552 | 0.5941 | | 4.1330 | 23500 | 0.1004 | 0.3886 | 0.5067 | | 4.2209 | 24000 | 0.1061 | 0.3682 | 0.5654 | | 4.3088 | 24500 | 0.1087 | 0.3212 | 0.5556 | | 4.3968 | 25000 | 0.11 | 0.3348 | 0.5628 | | 4.4847 | 25500 | 0.1108 | 0.3740 | 0.5046 | | 4.5726 | 26000 | 0.1169 | 0.3092 | 0.5882 | | 4.6606 | 26500 | 0.1156 | 0.3498 | 0.4988 | | 4.7485 | 27000 | 0.1232 | 0.3042 | 0.5801 | | 4.8364 | 27500 | 0.1195 | 0.3685 | 0.5793 | | 4.9244 | 28000 | 0.122 | 0.3199 | 0.5383 | | 5.0123 | 28500 | 0.1151 | 0.4291 | 0.5510 | | 5.1002 | 29000 | 0.0815 | 0.4297 | 0.4973 | | 5.1882 | 29500 | 0.086 | 0.4798 | 0.4969 | 
| 5.2761 | 30000 | 0.0892 | 0.4475 | 0.5230 | | 5.3641 | 30500 | 0.0888 | 0.4165 | 0.4267 | | 5.4520 | 31000 | 0.0929 | 0.4398 | 0.4674 | | 5.5399 | 31500 | 0.0929 | 0.4551 | 0.4629 | | 5.6279 | 32000 | 0.0928 | 0.3756 | 0.4537 | | 5.7158 | 32500 | 0.0961 | 0.4014 | 0.5037 | | 5.8037 | 33000 | 0.0924 | 0.3953 | 0.5158 | | 5.8917 | 33500 | 0.0988 | 0.3890 | 0.5355 | | 5.9796 | 34000 | 0.0963 | 0.3823 | 0.5130 | | 6.0675 | 34500 | 0.0738 | 0.4251 | 0.4924 | | 6.1555 | 35000 | 0.0681 | 0.4444 | 0.4891 | | 6.2434 | 35500 | 0.0703 | 0.4472 | 0.4994 | | 6.3313 | 36000 | 0.071 | 0.4552 | 0.4920 | | 6.4193 | 36500 | 0.0706 | 0.4149 | 0.4726 | | 6.5072 | 37000 | 0.0751 | 0.3840 | 0.4771 | | 6.5951 | 37500 | 0.0708 | 0.4455 | 0.5152 | | 6.6831 | 38000 | 0.0775 | 0.4124 | 0.4290 | | 6.7710 | 38500 | 0.0766 | 0.4004 | 0.4459 | | 6.8590 | 39000 | 0.0811 | 0.4209 | 0.4192 | | 6.9469 | 39500 | 0.0766 | 0.4294 | 0.4805 | | 7.0348 | 40000 | 0.07 | 0.4470 | 0.4623 | | 7.1228 | 40500 | 0.05 | 0.5520 | 0.4211 | | 7.2107 | 41000 | 0.0555 | 0.4425 | 0.3890 | | 7.2986 | 41500 | 0.057 | 0.5324 | 0.4204 | | 7.3866 | 42000 | 0.06 | 0.4664 | 0.4517 | | 7.4745 | 42500 | 0.0583 | 0.4506 | 0.4966 | | 7.5624 | 43000 | 0.0582 | 0.4441 | 0.4659 | | 7.6504 | 43500 | 0.0615 | 0.4528 | 0.4495 | | 7.7383 | 44000 | 0.0614 | 0.4744 | 0.4350 | | 7.8262 | 44500 | 0.0605 | 0.4272 | 0.4630 | | 7.9142 | 45000 | 0.0625 | 0.4709 | 0.4414 | | 8.0021 | 45500 | 0.065 | 0.4513 | 0.4060 | | 8.0900 | 46000 | 0.0412 | 0.6073 | 0.3839 | | 8.1780 | 46500 | 0.0431 | 0.5060 | 0.3656 | | 8.2659 | 47000 | 0.0425 | 0.5438 | 0.4042 | | 8.3539 | 47500 | 0.0462 | 0.5835 | 0.4171 | | 8.4418 | 48000 | 0.0475 | 0.5035 | 0.4144 | | 8.5297 | 48500 | 0.0476 | 0.5046 | 0.4105 | | 8.6177 | 49000 | 0.0483 | 0.5080 | 0.4071 | | 8.7056 | 49500 | 0.0487 | 0.5682 | 0.4130 | | 8.7935 | 50000 | 0.049 | 0.5026 | 0.4283 | | 8.8815 | 50500 | 0.0517 | 0.4920 | 0.3529 | | 8.9694 | 51000 | 0.0495 | 0.4956 | 0.4038 | | 9.0573 | 51500 | 0.0378 | 0.5368 | 0.3654 | | 9.1453 | 52000 | 0.0328 | 0.4895 | 0.3775 | | 9.2332 | 52500 | 0.0337 | 0.5245 | 0.4051 | | 9.3211 | 53000 | 0.0361 | 0.5925 | 0.3984 | | 9.4091 | 53500 | 0.0369 | 0.5197 | 0.4134 | | 9.4970 | 54000 | 0.0388 | 0.5246 | 0.4186 | | 9.5849 | 54500 | 0.0364 | 0.5243 | 0.4245 | | 9.6729 | 55000 | 0.0373 | 0.5164 | 0.4119 | | 9.7608 | 55500 | 0.0358 | 0.6019 | 0.4171 | | 9.8488 | 56000 | 0.0364 | 0.6166 | 0.4050 | | 9.9367 | 56500 | 0.0406 | 0.5238 | 0.4329 | | 10.0246 | 57000 | 0.0361 | 0.6156 | 0.4138 | | 10.1126 | 57500 | 0.0267 | 0.5612 | 0.4073 | | 10.2005 | 58000 | 0.023 | 0.6370 | 0.4049 | | 10.2884 | 58500 | 0.0293 | 0.5876 | 0.4069 | | 10.3764 | 59000 | 0.0255 | 0.6200 | 0.4239 | | 10.4643 | 59500 | 0.0282 | 0.5882 | 0.4085 | | 10.5522 | 60000 | 0.0307 | 0.5499 | 0.4084 | | 10.6402 | 60500 | 0.0294 | 0.6012 | 0.3956 | | 10.7281 | 61000 | 0.0283 | 0.6330 | 0.4027 | | 10.8160 | 61500 | 0.0323 | 0.5620 | 0.4037 | | 10.9040 | 62000 | 0.0305 | 0.6073 | 0.4067 | | 10.9919 | 62500 | 0.0284 | 0.5969 | 0.4048 | | 11.0798 | 63000 | 0.0194 | 0.6831 | 0.4041 | | 11.1678 | 63500 | 0.0209 | 0.6346 | 0.3937 | | 11.2557 | 64000 | 0.0183 | 0.6610 | 0.3691 | | 11.3437 | 64500 | 0.0221 | 0.6509 | 0.3755 | | 11.4316 | 65000 | 0.0217 | 0.7004 | 0.4256 | | 11.5195 | 65500 | 0.0239 | 0.5978 | 0.4087 | | 11.6075 | 66000 | 0.0234 | 0.6237 | 0.3687 | | 11.6954 | 66500 | 0.0222 | 0.5774 | 0.4177 | | 11.7833 | 67000 | 0.0234 | 0.6203 | 0.4368 | | 11.8713 | 67500 | 0.0216 | 0.5981 | 0.4396 | | 11.9592 | 68000 | 0.0235 | 0.5636 | 0.4338 | 
| 12.0471 | 68500 | 0.0193 | 0.6815 | 0.4295 | | 12.1351 | 69000 | 0.0154 | 0.6883 | 0.4516 | | 12.2230 | 69500 | 0.0153 | 0.7075 | 0.4128 | | 12.3109 | 70000 | 0.0155 | 0.6650 | 0.4300 | | 12.3989 | 70500 | 0.0147 | 0.7161 | 0.4029 | | 12.4868 | 71000 | 0.015 | 0.7274 | 0.4082 | | 12.5747 | 71500 | 0.0172 | 0.6526 | 0.3834 | | 12.6627 | 72000 | 0.0156 | 0.6420 | 0.3574 | | 12.7506 | 72500 | 0.0158 | 0.6716 | 0.3905 | | 12.8386 | 73000 | 0.0165 | 0.6757 | 0.3805 | | 12.9265 | 73500 | 0.0144 | 0.6964 | 0.3932 | | 13.0144 | 74000 | 0.0133 | 0.7359 | 0.3913 | | 13.1024 | 74500 | 0.0137 | 0.7126 | 0.4071 | | 13.1903 | 75000 | 0.0118 | 0.7234 | 0.4115 | | 13.2782 | 75500 | 0.0117 | 0.7391 | 0.4225 | | 13.3662 | 76000 | 0.0123 | 0.7435 | 0.3931 | | 13.4541 | 76500 | 0.0121 | 0.7334 | 0.4033 | | 13.5420 | 77000 | 0.0114 | 0.7370 | 0.3965 | | 13.6300 | 77500 | 0.0107 | 0.7646 | 0.4340 | | 13.7179 | 78000 | 0.0123 | 0.7255 | 0.4015 | | 13.8058 | 78500 | 0.0129 | 0.6944 | 0.3901 | | 13.8938 | 79000 | 0.0097 | 0.7561 | 0.4181 | | 13.9817 | 79500 | 0.0121 | 0.7178 | 0.3991 | | 14.0696 | 80000 | 0.0087 | 0.7505 | 0.3858 | | 14.1576 | 80500 | 0.0071 | 0.7765 | 0.3827 | | 14.2455 | 81000 | 0.0082 | 0.7851 | 0.3812 | | 14.3335 | 81500 | 0.0094 | 0.7683 | 0.3877 | | 14.4214 | 82000 | 0.0076 | 0.7705 | 0.3938 | | 14.5093 | 82500 | 0.0071 | 0.7653 | 0.3916 | | 14.5973 | 83000 | 0.0092 | 0.7557 | 0.3851 | | 14.6852 | 83500 | 0.0058 | 0.7718 | 0.3889 | | 14.7731 | 84000 | 0.0069 | 0.7753 | 0.3895 | | 14.8611 | 84500 | 0.0083 | 0.7706 | 0.3902 | | 14.9490 | 85000 | 0.0075 | 0.7741 | 0.3909 | | -1 | -1 | - | - | 0.6229 | * The bold row denotes the saved checkpoint. </details> ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 4.1.0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
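As a companion to the usage snippet above, here is a minimal training sketch, not the exact script behind this checkpoint, showing how a fine-tune like the one described in Training Details can be set up with the Sentence Transformers `CrossEncoderTrainer` and `BinaryCrossEntropyLoss`. The base checkpoint is inferred from the model name, and the inline one-row dataset is a stand-in for the 363,861-row Quora pairs set; treat both as assumptions.

```python
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Base checkpoint inferred from the model name -- an assumption, not stated in the card
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2", num_labels=1)

# Toy stand-in for the Quora Question Pairs LangCache train set described above
train_dataset = Dataset.from_dict({
    "sentence1": ["How can I get a list of my Gmail accounts?"],
    "sentence2": ["How can I find all my old Gmail accounts?"],
    "label": [1],
})

loss = BinaryCrossEntropyLoss(model)  # same loss as listed in Training Details

args = CrossEncoderTrainingArguments(
    output_dir="langcache-crossencoder",
    per_device_train_batch_size=64,  # matches the non-default hyperparameters above
    learning_rate=2e-4,
    num_train_epochs=15,
)

CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
).train()
```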
bennyhobart/Qwen2-0.5B-GRPO-test
bennyhobart
2025-06-20T02:11:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:AI-MO/NuminaMath-TIR", "arxiv:2402.03300", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-19T00:39:52Z
--- base_model: Qwen/Qwen2-0.5B-Instruct datasets: AI-MO/NuminaMath-TIR library_name: transformers model_name: Qwen2-0.5B-GRPO-test tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen2-0.5B-GRPO-test This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="bennyhobart/Qwen2-0.5B-GRPO-test", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
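For readers who want to see what such a run looks like in code, below is a minimal sketch using TRL's `GRPOTrainer`. The column rename and the reward function are illustrative assumptions; the card does not document the rewards actually used for this checkpoint.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column; NuminaMath-TIR stores questions in "problem"
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.rename_column("problem", "prompt")

def reward_boxed(completions, **kwargs):
    # Placeholder reward: prefer completions that contain a boxed final answer.
    # The reward functions used for this checkpoint are not documented.
    return [1.0 if "\\boxed" in c else 0.0 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_boxed,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO-test"),
    train_dataset=dataset,
)
trainer.train()
```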
rrayhka/Qwen2.5-1.5B-Kemenko-Merged
rrayhka
2025-06-20T02:09:10Z
16
0
transformers
[ "transformers", "pytorch", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-06T07:35:24Z
--- base_model: unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** rrayhka - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
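The card ships no inference snippet. Since the repository holds merged full weights in `transformers` format, a standard chat-style pipeline call should work; a minimal sketch (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="rrayhka/Qwen2.5-1.5B-Kemenko-Merged",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize what this model was fine-tuned for."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```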
rrayhka/Llama-3.2-3B-Instruct
rrayhka
2025-06-20T02:08:46Z
19
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:finetune:unsloth/Llama-3.2-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-17T05:16:15Z
--- base_model: unsloth/Llama-3.2-3B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** rrayhka - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
quickmt/quickmt-en-ko
quickmt
2025-06-20T02:03:03Z
0
0
null
[ "translation", "en", "ko", "dataset:quickmt/quickmt-train.ko-en", "license:cc-by-4.0", "model-index", "region:us" ]
translation
2025-06-19T21:24:47Z
--- language: - en - ko tags: - translation license: cc-by-4.0 datasets: - quickmt/quickmt-train.ko-en model-index: - name: quickmt-en-ko results: - task: name: Translation eng-kor type: translation args: eng-kor dataset: name: flores101-devtest type: flores_101 args: eng_Latn kor_Hang devtest metrics: - name: CHRF type: chrf value: 56.25 - name: BLEU type: bleu value: 27.03 - name: COMET type: comet value: 86.11 --- # `quickmt-en-ko` Neural Machine Translation Model `quickmt-en-ko` is a reasonably fast and reasonably accurate neural machine translation model for translation from `en` into `ko`. ## Model Information * Trained using [`eole`](https://github.com/eole-nlp/eole) * 185M parameter transformer 'big' with 8 encoder layers and 2 decoder layers * 20k sentencepiece vocabularies * Exported for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format * Training data: https://huggingface.co/datasets/quickmt/quickmt-train.ko-en/tree/main See the `eole` model configuration in this repository for further details and the `eole-model` for the raw `eole` (pytorch) model. ## Usage with `quickmt` You must install the Nvidia cuda toolkit first, if you want to do GPU inference. Next, install the `quickmt` python library and download the model: ```bash git clone https://github.com/quickmt/quickmt.git pip install ./quickmt/ quickmt-model-download quickmt/quickmt-en-ko ./quickmt-en-ko ``` Finally use the model in python: ```python from quickmt import Translator # Auto-detects GPU, set to "cpu" to force CPU inference t = Translator("./quickmt-en-ko/", device="auto") # Translate - set beam size to 5 for higher quality (but slower speed) sample_text = 'Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days.' t(sample_text, beam_size=5) # Get alternative translations by sampling # You can pass any cTranslate2 `translate_batch` arguments t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9) > '노바 스코샤 주 핼리팩스에 있는 댈하우지 대학교 의과 교수인 에후드 우르 박사와 캐나다 당뇨병 협회의 임상 및 과학 부문 의장은 이 연구가 아직 초기 단계에 있다고 경고했다.' ``` The model is in `ctranslate2` format, and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly instead of through `quickmt`. It is also possible to get this model to work with e.g. [LibreTranslate](https://libretranslate.com/) which also uses `ctranslate2` and `sentencepiece`. ## Metrics `bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("kor_Hang"->"eng_Latn"). `comet22` with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate (using `ctranslate2`) the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (faster speed is possible using a large batch size). 
| | bleu | chrf2 | comet22 | Time (s) | |:---------------------------------|-------:|--------:|----------:|-----------:| | quickmt/quickmt-en-ko | 14.97 | 36.96 | 87.10 | 1.36 | | facebook/nllb-200-distilled-600M | 12.15 | 33.66 | 87.39 | 24.75 | | facebook/nllb-200-distilled-1.3B | 13.23 | 35.62 | 88.39 | 40.29 | | facebook/m2m100_418M | 9.91 | 30.69 | 83.20 | 22.23 | | facebook/m2m100_1.2B | 11.35 | 33.26 | 85.65 | 41.15 | `quickmt-en-ko` is the fastest and is higher quality than `m2m100_418m`, `m2m100_1.2B` and `nllb-200-distilled-600M`.
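As noted above, the checkpoint is plain CTranslate2 plus SentencePiece, so `quickmt` can be bypassed entirely. A minimal sketch follows; the tokenizer filenames are assumptions, so check the downloaded folder for the actual names:

```python
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("./quickmt-en-ko", device="auto")

# Filenames below are assumptions -- inspect the model folder for the real ones
sp_src = spm.SentencePieceProcessor(model_file="./quickmt-en-ko/src.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file="./quickmt-en-ko/tgt.spm.model")

tokens = sp_src.encode("The research is still in its early days.", out_type=str)
result = translator.translate_batch([tokens], beam_size=5)
print(sp_tgt.decode(result[0].hypotheses[0]))
```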
18-Official-mezzo-fun-Viral-videos-Link/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
18-Official-mezzo-fun-Viral-videos-Link
2025-06-20T02:02:36Z
0
0
null
[ "region:us" ]
null
2025-06-20T02:02:14Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Official-Prajakta-Mali-18-Viral-Video/19.FULL.VIDEO.Prajakta.Mali.Viral.Video.Tutorial.Official
Official-Prajakta-Mali-18-Viral-Video
2025-06-20T02:02:23Z
0
0
null
[ "region:us" ]
null
2025-06-20T02:01:40Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
lalalaDa/ER-GRPO
lalalaDa
2025-06-20T02:00:29Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "ERGRPO", "trl", "grpo", "conversational", "dataset:knoveleng/open-rs", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-14T15:36:56Z
---
datasets: knoveleng/open-rs
library_name: transformers
model_name: ER-GRPO
tags:
- generated_from_trainer
- ERGRPO
- trl
- grpo
licence: license
---

# Model Card for ER-GRPO

This model was fine-tuned on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset; its base model is not recorded in the card metadata. It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lalalaDa/ER-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Vortex5/WittyAthena-24b
Vortex5
2025-06-20T01:56:30Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "roleplay", "conversational", "arxiv:2203.05482", "base_model:TheDrummer/Cydonia-24B-v3", "base_model:merge:TheDrummer/Cydonia-24B-v3", "base_model:Vortex5/Clockwork-Flower-24B", "base_model:merge:Vortex5/Clockwork-Flower-24B", "base_model:arcee-ai/Arcee-Blitz", "base_model:merge:arcee-ai/Arcee-Blitz", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T00:27:39Z
--- base_model: - Vortex5/Clockwork-Flower-24B - TheDrummer/Cydonia-24B-v3 - arcee-ai/Arcee-Blitz library_name: transformers tags: - mergekit - merge - roleplay --- # WittyAthena-24b WittyAthena-24b is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6669a3a617b838fda45637b8/uy99LsTsYasM3VAgkH4sf.png) ## Merge Details ### Merge Method This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method using [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz) as a base. ### Models Merged The following models were included in the merge: * [Vortex5/Clockwork-Flower-24B](https://huggingface.co/Vortex5/Clockwork-Flower-24B) * [TheDrummer/Cydonia-24B-v3](https://huggingface.co/TheDrummer/Cydonia-24B-v3) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: arcee-ai/Arcee-Blitz dtype: bfloat16 merge_method: linear models: - model: arcee-ai/Arcee-Blitz parameters: weight: 0.34 - model: Vortex5/Clockwork-Flower-24B parameters: weight: 0.33 - model: TheDrummer/Cydonia-24B-v3 parameters: weight: 0.33 ```
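To reproduce a merge like this from the configuration above, the usual entry point is mergekit's `mergekit-yaml` CLI. A sketch, assuming the YAML block is saved locally (the `--cuda` flag is optional):

```bash
pip install mergekit
# save the YAML configuration above as config.yaml, then:
mergekit-yaml config.yaml ./WittyAthena-24b --cuda
```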
Montecarlo2024/Llama3.1-8B-Nemotron-Python-v2
Montecarlo2024
2025-06-20T01:56:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "base_model:finetune:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T01:52:36Z
--- base_model: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Montecarlo2024 - **License:** apache-2.0 - **Finetuned from model :** nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
BootesVoid/cmc447bgg00f1bfif95hjas83_cmc44hzex00h1bfifqlleftlq
BootesVoid
2025-06-20T01:53:12Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-20T01:53:08Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: PEACHY --- # Cmc447Bgg00F1Bfif95Hjas83_Cmc44Hzex00H1Bfifqlleftlq <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `PEACHY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "PEACHY", "lora_weights": "https://huggingface.co/BootesVoid/cmc447bgg00f1bfif95hjas83_cmc44hzex00h1bfifqlleftlq/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc447bgg00f1bfif95hjas83_cmc44hzex00h1bfifqlleftlq', weight_name='lora.safetensors') image = pipeline('PEACHY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmc447bgg00f1bfif95hjas83_cmc44hzex00h1bfifqlleftlq/discussions) to add images that show off what you’ve made with this LoRA.
Official-Anjana-Kc-18-Viral-Videos/FULL.VIDEO.Anjana.Kc.Viral.Video.Tutorial.Official
Official-Anjana-Kc-18-Viral-Videos
2025-06-20T01:50:11Z
0
0
null
[ "region:us" ]
null
2025-06-20T01:49:53Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-18-2025-06-20
morturr
2025-06-20T01:49:38Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T01:49:21Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-18-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-18-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
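This repository contains only LoRA adapter weights (`library_name: peft`), so inference requires the gated Llama-2 base model plus the adapter. A minimal sketch, assuming access to `meta-llama/Llama-2-7b-hf`:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_repo = "morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-18-2025-06-20"

# Reads the base model id from the adapter config and applies the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained(adapter_repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Tell me a dad joke about rivers.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```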
18-hot-viral-indian-clip-video/18.LEAKS.VIDEO.hot.viral.indian.clip.video.new.Video.Tutorial.Official
18-hot-viral-indian-clip-video
2025-06-20T01:43:49Z
0
0
null
[ "region:us" ]
null
2025-06-20T01:41:22Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
santoshmds21/bert-phishing-classifier_teacher
santoshmds21
2025-06-20T01:42:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-20T01:42:33Z
--- library_name: transformers license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-phishing-classifier_teacher results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-phishing-classifier_teacher This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7047 - Accuracy: 0.491 - Auc: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:| | 0.7135 | 1.0 | 263 | 0.6957 | 0.509 | 0.692 | | 0.7053 | 2.0 | 526 | 0.7073 | 0.491 | 0.274 | | 0.7033 | 3.0 | 789 | 0.7039 | 0.509 | 0.701 | | 0.7025 | 4.0 | 1052 | 0.6955 | 0.491 | 0.471 | | 0.6995 | 5.0 | 1315 | 0.7008 | 0.491 | 0.533 | | 0.6993 | 6.0 | 1578 | 0.6982 | 0.491 | 0.708 | | 0.696 | 7.0 | 1841 | 0.6993 | 0.491 | 0.654 | | 0.6939 | 8.0 | 2104 | 0.6954 | 0.491 | 0.705 | | 0.6907 | 9.0 | 2367 | 0.6994 | 0.491 | 0.673 | | 0.6946 | 10.0 | 2630 | 0.7047 | 0.491 | 0.75 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
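No usage snippet is provided above. As a standard BERT text-classification checkpoint, it should load with a plain pipeline; the label names come from the training config, so inspect `model.config.id2label` rather than assuming them. A minimal sketch with an illustrative input:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="santoshmds21/bert-phishing-classifier_teacher",
)
# Label names are set by the training config; check classifier.model.config.id2label
print(classifier("Your account is locked - verify at http://login-example.test now"))
```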
LTTEAM/VideoAI
LTTEAM
2025-06-20T01:36:04Z
0
0
diffusers
[ "diffusers", "safetensors", "ltx-video", "image-to-video", "en", "license:other", "diffusers:LTXPipeline", "region:us" ]
image-to-video
2025-06-10T03:37:53Z
--- tags: - ltx-video - image-to-video pinned: true language: - en license: other library_name: diffusers --- # LTX-Video Model Card This model card focuses on the model associated with the LTX-Video model, codebase available [here](https://github.com/Lightricks/LTX-Video). LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 30 FPS videos at a 1216×704 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content. <img src="./media/trailer.gif" alt="trailer" width="512"> ### Image-to-video examples | | | | |:---:|:---:|:---:| | ![example1](./media/ltx-video_i2v_example_00001.gif) | ![example2](./media/ltx-video_i2v_example_00002.gif) | ![example3](./media/ltx-video_i2v_example_00003.gif) | | ![example4](./media/ltx-video_i2v_example_00004.gif) | ![example5](./media/ltx-video_i2v_example_00005.gif) | ![example6](./media/ltx-video_i2v_example_00006.gif) | | ![example7](./media/ltx-video_i2v_example_00007.gif) | ![example8](./media/ltx-video_i2v_example_00008.gif) | ![example9](./media/ltx-video_i2v_example_00009.gif) | # Models & Workflows | Name | Notes | inference.py config | ComfyUI workflow (Recommended) | |----------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------| | ltxv-13b-0.9.7-dev | Highest quality, requires more VRAM | [ltxv-13b-0.9.7-dev.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.7-dev.yaml) | [ltxv-13b-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-base.json) | | [ltxv-13b-0.9.7-mix](https://app.ltx.studio/motion-workspace?videoModel=ltxv-13b) | Mix ltxv-13b-dev and ltxv-13b-distilled in the same multi-scale rendering workflow for balanced speed-quality | N/A | [ltxv-13b-i2v-mixed-multiscale.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-mixed-multiscale.json) | | [ltxv-13b-0.9.7-distilled](https://app.ltx.studio/motion-workspace?videoModel=ltxv) | Faster, less VRAM usage, slight quality reduction compared to 13b. 
Ideal for rapid iterations | [ltxv-13b-0.9.7-distilled.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.7-dev.yaml) | [ltxv-13b-dist-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-i2v-base.json) | | [ltxv-13b-0.9.7-distilled-lora128](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-lora128.safetensors) | LoRA to make ltxv-13b-dev behave like the distilled model | N/A | N/A | | ltxv-13b-0.9.7-fp8 | Quantized version of ltxv-13b | Coming soon | [ltxv-13b-i2v-base-fp8.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-base-fp8.json) | | ltxv-13b-0.9.7-distilled-fp8 | Quantized version of ltxv-13b-distilled | Coming soon | [ltxv-13b-dist-i2v-base-fp8.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-i2v-base-fp8.json) | | ltxv-2b-0.9.6 | Good quality, lower VRAM requirement than ltxv-13b | [ltxv-2b-0.9.6-dev.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.6-dev.yaml) | [ltxvideo-i2v.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/low_level/ltxvideo-i2v.json) | | ltxv-2b-0.9.6-distilled | 15× faster, real-time capable, fewer steps needed, no STG/CFG required | [ltxv-2b-0.9.6-distilled.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.6-distilled.yaml) | [ltxvideo-i2v-distilled.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/low_level/ltxvideo-i2v-distilled.json) | ## Model Details - **Developed by:** Lightricks - **Model type:** Diffusion-based image-to-video generation model - **Language(s):** English ## Usage ### Direct use You can use the model for purposes under the license: - 2B version 0.9: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt) - 2B version 0.9.1 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.1.license.txt) - 2B version 0.9.5 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.license.txt) - 2B version 0.9.6-dev [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-dev-04-25.license.txt) - 2B version 0.9.6-distilled [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-distilled-04-25.license.txt) - 13B version 0.9.7-dev [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.license.txt) - 13B version 0.9.7-dev-fp8 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev-fp8.license.txt) - 13B version 0.9.7-distilled [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled.license.txt) - 13B version 0.9.7-distilled-fp8 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-fp8.license.txt) - 13B version 0.9.7-distilled-lora128 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-lora128.license.txt) - Temporal upscaler version 0.9.7 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-temporal-upscaler-0.9.7.license.txt) - Spatial upscaler version 0.9.7 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-spatial-upscaler-0.9.7.license.txt) ### General tips: * The model works on resolutions that are divisible by 32 and number of frames that are divisible by 8 + 1 (e.g. 257). 
In case the resolution or number of frames are not divisible by 32 or 8 + 1, the input will be padded with -1 and then cropped to the desired resolution and number of frames. * The model works best on resolutions under 720 x 1280 and number of frames below 257. * Prompts should be in English. The more elaborate the better. Good prompt looks like `The turquoise waves crash against the dark, jagged rocks of the shore, sending white foam spraying into the air. The scene is dominated by the stark contrast between the bright blue water and the dark, almost black rocks. The water is a clear, turquoise color, and the waves are capped with white foam. The rocks are dark and jagged, and they are covered in patches of green moss. The shore is lined with lush green vegetation, including trees and bushes. In the background, there are rolling hills covered in dense forest. The sky is cloudy, and the light is dim.` ### Online demo The model is accessible right away via the following links: - [LTX-Studio image-to-video (13B-mix)](https://app.ltx.studio/motion-workspace?videoModel=ltxv-13b) - [LTX-Studio image-to-video (13B distilled)](https://app.ltx.studio/motion-workspace?videoModel=ltxv) - [Fal.ai image-to-video (13B full)](https://fal.ai/models/fal-ai/ltx-video-13b-dev/image-to-video) - [Fal.ai image-to-video (13B distilled)](https://fal.ai/models/fal-ai/ltx-video-13b-distilled/image-to-video) - [Replicate image-to-video](https://replicate.com/lightricks/ltx-video) ### ComfyUI To use our model with ComfyUI, please follow the instructions at a dedicated [ComfyUI repo](https://github.com/Lightricks/ComfyUI-LTXVideo/). ### Run locally #### Installation The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2. ```bash git clone https://github.com/Lightricks/LTX-Video.git cd LTX-Video # create env python -m venv env source env/bin/activate python -m pip install -e .\[inference-script\] ``` #### Inference To use our model, please follow the inference code in [inference.py](https://github.com/Lightricks/LTX-Video/blob/main/inference.py): #### For image-to-video generation: ```bash python inference.py --prompt "PROMPT" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml ``` #### For video generation with multiple conditions: You can now generate a video conditioned on a set of images and/or short video segments. Simply provide a list of paths to the images or video segments you want to condition on, along with their target frame numbers in the generated video. You can also specify the conditioning strength for each item (default: 1.0). ```bash python inference.py --prompt "PROMPT" --conditioning_media_paths IMAGE_OR_VIDEO_PATH_1 IMAGE_OR_VIDEO_PATH_2 --conditioning_start_frames TARGET_FRAME_1 TARGET_FRAME_2 --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml ``` ### Diffusers 🧨 LTX Video is compatible with the [Diffusers Python library](https://huggingface.co/docs/diffusers/main/en/index) for image-to-video generation. Make sure you install `diffusers` before trying out the examples below. 
```bash
pip install -U git+https://github.com/huggingface/diffusers
```

Now, you can run the examples below (note that the upsampling stage is optional but recommended):

### For image-to-video:

```py
import torch
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_image, load_video

pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.7-dev", torch_dtype=torch.bfloat16)
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.7", vae=pipe.vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe_upsample.to("cuda")
pipe.vae.enable_tiling()

def round_to_nearest_resolution_acceptable_by_vae(height, width):
    height = height - (height % pipe.vae_spatial_compression_ratio)
    width = width - (width % pipe.vae_spatial_compression_ratio)
    return height, width

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png")
video = load_video(export_to_video([image]))  # compress the image using video compression as the model was trained on videos
condition1 = LTXVideoCondition(video=video, frame_index=0)

prompt = "A cute little penguin takes out a book and starts reading it"
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
expected_height, expected_width = 480, 832
downscale_factor = 2 / 3
num_frames = 96

# Part 1. Generate video at smaller resolution
downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
latents = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=downscaled_width,
    height=downscaled_height,
    num_frames=num_frames,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(0),
    output_type="latent",
).frames

# Part 2. Upscale generated video using latent upsampler with fewer inference steps
# The available latent upsampler upscales the height/width by 2x
upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
upscaled_latents = pipe_upsample(
    latents=latents,
    output_type="latent"
).frames

# Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
video = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=upscaled_width,
    height=upscaled_height,
    num_frames=num_frames,
    denoise_strength=0.4,  # Effectively, 4 inference steps out of 10
    num_inference_steps=10,
    latents=upscaled_latents,
    decode_timestep=0.05,
    image_cond_noise_scale=0.025,
    generator=torch.Generator().manual_seed(0),
    output_type="pil",
).frames[0]

# Part 4.
Downscale the video to the expected resolution video = [frame.resize((expected_width, expected_height)) for frame in video] export_to_video(video, "output.mp4", fps=24) ``` ### For video-to-video: ```py import torch from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition from diffusers.utils import export_to_video, load_video pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.7-dev", torch_dtype=torch.bfloat16) pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.7", vae=pipe.vae, torch_dtype=torch.bfloat16) pipe.to("cuda") pipe_upsample.to("cuda") pipe.vae.enable_tiling() def round_to_nearest_resolution_acceptable_by_vae(height, width): height = height - (height % pipe.vae_spatial_compression_ratio) width = width - (width % pipe.vae_spatial_compression_ratio) return height, width video = load_video( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4" )[:21] # Use only the first 21 frames as conditioning condition1 = LTXVideoCondition(video=video, frame_index=0) prompt = "The video depicts a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region." negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted" expected_height, expected_width = 768, 1152 downscale_factor = 2 / 3 num_frames = 161 # Part 1. Generate video at smaller resolution downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor) downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width) latents = pipe( conditions=[condition1], prompt=prompt, negative_prompt=negative_prompt, width=downscaled_width, height=downscaled_height, num_frames=num_frames, num_inference_steps=30, generator=torch.Generator().manual_seed(0), output_type="latent", ).frames # Part 2. Upscale generated video using latent upsampler with fewer inference steps # The available latent upsampler upscales the height/width by 2x upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2 upscaled_latents = pipe_upsample( latents=latents, output_type="latent" ).frames # Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended) video = pipe( conditions=[condition1], prompt=prompt, negative_prompt=negative_prompt, width=upscaled_width, height=upscaled_height, num_frames=num_frames, denoise_strength=0.4, # Effectively, 4 inference steps out of 10 num_inference_steps=10, latents=upscaled_latents, decode_timestep=0.05, image_cond_noise_scale=0.025, generator=torch.Generator().manual_seed(0), output_type="pil", ).frames[0] # Part 4. Downscale the video to the expected resolution video = [frame.resize((expected_width, expected_height)) for frame in video] export_to_video(video, "output.mp4", fps=24) ``` To learn more, check out the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video). Diffusers also supports directly loading from the original LTX checkpoints using the `from_single_file()` method. 
Check out [this section](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video#loading-single-files) to learn more.

## Limitations

- This model is not intended or able to provide factual information.
- As a statistical model this checkpoint might amplify existing societal biases.
- The model may fail to generate videos that match the prompts perfectly.
- Prompt following is heavily influenced by the prompting style.
sergioalves/f0d260ca-7492-4448-82ca-c95082e8c93c
sergioalves
2025-06-20T01:35:31Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-20T01:20:04Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: f0d260ca-7492-4448-82ca-c95082e8c93c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Qwen2.5-0.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 94189bc178f68e6e_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.05 enabled: true group_by_length: false rank_loss: true reference_model: NousResearch/Meta-Llama-3-8B-Instruct early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: sergioalves/f0d260ca-7492-4448-82ca-c95082e8c93c hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-07 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/94189bc178f68e6e_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2977495d-2d1a-46fa-9f7d-05ebe07807e6 wandb_project: s56-7 wandb_run: your_name wandb_runid: 2977495d-2d1a-46fa-9f7d-05ebe07807e6 warmup_steps: 25 weight_decay: 0.05 xformers_attention: false ``` </details><br> # f0d260ca-7492-4448-82ca-c95082e8c93c This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.5467 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 25 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5025 | 0.0001 | 1 | 1.5701 | | 1.4456 | 0.0142 | 100 | 1.5545 | | 1.3811 | 0.0284 | 200 | 1.5467 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
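Because this repository stores a LoRA adapter rather than full weights, inference pairs the base model with the adapter. A minimal sketch using `PeftModel`, with an optional merge step to fold the adapter into the base weights:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-0.5B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "sergioalves/f0d260ca-7492-4448-82ca-c95082e8c93c")
model = model.merge_and_unload()  # optional: bake the adapter into the base weights

tokenizer = AutoTokenizer.from_pretrained(base_id)
inputs = tokenizer("Hello, what can you do?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```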
ssyafiqahsiti/RandomForest_cervical_cancer
ssyafiqahsiti
2025-06-20T01:32:25Z
0
0
null
[ "biology", "medical", "image-classification", "region:us" ]
image-classification
2025-05-22T09:04:29Z
---
pipeline_tag: image-classification
tags:
- biology
- medical
---

# 🧫 Cervical Cancer Classifier

This tool allows users to upload colposcopy images and classify them as Normal or Abnormal using a RandomForest machine-learning model.

## 📌 Features

- ✅ Upload a colposcopy image
- 🧠 Predict whether the cervix is Normal or Abnormal

## 📂 How to Use

Upload your colposcopy image into the app. The model will predict the condition of the cervix.

## 🔒 Privacy

This demo is for research and demonstration purposes only. Uploaded images are not stored.

## 👩‍⚕️ Medical Disclaimer

This tool is not intended for clinical or diagnostic use. Always consult a qualified medical professional for an accurate diagnosis.
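## 💻 Example (hypothetical)

Since the card describes only the app workflow, here is a hypothetical offline sketch of how a scikit-learn RandomForest image classifier of this kind is typically queried; the artifact name, input size, feature extraction, and label encoding are all assumptions, not this repo's actual API.

```python
import joblib
import numpy as np
from PIL import Image

# Assumed artifact name; check the repo's file listing for the real one.
clf = joblib.load("random_forest_cervical.joblib")

# Assumed preprocessing: resize and flatten raw pixels into one feature row.
img = Image.open("colposcopy_example.jpg").convert("RGB").resize((64, 64))
features = np.asarray(img, dtype=np.float32).reshape(1, -1)

# Assumed label encoding: 0 = Normal, 1 = Abnormal.
pred = clf.predict(features)[0]
print("Abnormal" if pred == 1 else "Normal")
```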
Vykyan/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-bnb-4bit-fp4
Vykyan
2025-06-20T01:31:10Z
0
0
null
[ "safetensors", "qwen2", "arxiv:2408.07990", "arxiv:2401.10491", "arxiv:2412.03187", "base_model:FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview", "base_model:quantized:FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-20T01:29:39Z
---
base_model:
- FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview
license: apache-2.0
---

# FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview (Quantized)

## Description

This model is a quantized version of the original model [`FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview`](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview). It was quantized to 4-bit with the BitsAndBytes library using the [bnb-my-repo](https://huggingface.co/spaces/HF-Quantization/bnb-my-repo) space.

## Quantization Details
- **Quantization Type**: int4
- **bnb_4bit_quant_type**: fp4
- **bnb_4bit_use_double_quant**: True
- **bnb_4bit_compute_dtype**: bfloat16
- **bnb_4bit_quant_storage**: int8

# 📄 Original Model Information

<p align="center" width="100%">
</p>

<div id="top" align="center">

FuseO1-Preview: System-II Reasoning Fusion of LLMs
-----------------------------

<h4> |<a href="https://arxiv.org/abs/2408.07990"> 📑 Paper </a> | <a href="https://github.com/fanqiwan/FuseAI"> 🐱 GitHub Repo </a> | <a href="https://huggingface.co/FuseAI"> 🤗 Hugging Face </a> | <a href="https://huggingface.co/blog/Wanfq/fuseo1-preview"> 🌐 Blog </a> | </h4>

<!-- **Authors:** -->
_Fanqi Wan, Longguang Zhong, Ziyi Yang, Weizhou Shen, Xinting Huang_

<!-- **Affiliations:** -->
_FuseAI Team_

</div>

<p align="center">
<img src="./assets/fuseo1-preview.jpg" width="100%">
<br>
</p>

## Overview

[FuseO1-Preview](https://huggingface.co/collections/FuseAI/fuseo1-preview-678eb56093649b2688bc9977) is our initial endeavor to enhance the System-II reasoning capabilities of large language models (LLMs) through innovative model fusion techniques. By employing our advanced [SCE](https://arxiv.org/abs/2408.07990) merging methodologies, we integrate multiple open-source o1-like LLMs into a unified model. Our goal is to incorporate the distinct knowledge and strengths from different reasoning LLMs into a single, unified model with strong System-II reasoning abilities, particularly in mathematics, coding, and science domains.

<p align="center">
<img src="./assets/sce.jpg" width="70%">
<br>
</p>

To achieve this, we conduct two types of model merging:

- **Long-Long Reasoning Merging**: This approach involves model fusion across LLMs that utilize long-CoT reasoning, with the goal of enhancing long-CoT reasoning capabilities. The resulting [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) achieves a Pass@1 accuracy of **74.0 on AIME24**, demonstrating significant performance improvements compared to the OpenAI o1-preview (44.6) and OpenAI o1-mini (63.4), even approaching OpenAI o1 (79.2).
- **Long-Short Reasoning Merging**: This approach involves model fusion between long-CoT and short-CoT LLMs, aiming to improve reasoning capabilities in both long and short reasoning processes. The resulting [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) and [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) are capable of utilizing both long and short reasoning processes and demonstrate relatively strong performance in long reasoning tasks.

| Model | Merge Type | Source Models | HF Link |
|:----- | ---- | ---- | ---- |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview), [GGUF](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-GGUF) |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), [NovaSky-AI/Sky-T1-32B-Flash](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Coder](https://huggingface.co/Qwen/Qwen2.5-32B-Coder) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) |

## Long-Long Reasoning Merging

We conduct experiments on the following long-CoT LLMs.

- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)
- [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview)

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview --cuda
```

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-32B-Preview --cuda
```

We provide the example code to use FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.

```python3
from vllm import LLM, SamplingParams

llm = LLM(model="FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|im_end|>", "<|end▁of▁sentence|>"], stop_token_ids=[151645, 151643])

conversations = [
    [
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{{}}."},
        {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$."},
    ],
]

responses = llm.chat(messages=conversations, sampling_params=sampling_params, use_tqdm=True)

for response in responses:
    print(response.outputs[0].text.strip())
```

## Long-Short Reasoning Merging

We conduct experiments on the following long-CoT and short-CoT LLMs.

- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- [Qwen/Qwen2.5-32B-Coder](https://huggingface.co/Qwen/Qwen2.5-32B-Coder)

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview --cuda
```

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview --cuda
```

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview --cuda
```

We provide the code to use FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview.

```python3
from vllm import LLM, SamplingParams

llm = LLM(model="FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|im_end|>", "<|end▁of▁sentence|>"], stop_token_ids=[151645, 151643])

conversations = [
    [
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{{}}."},
        {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$."},
    ],
]

responses = llm.chat(messages=conversations, sampling_params=sampling_params, use_tqdm=True)

for response in responses:
    print(response.outputs[0].text.strip())
```

## Evaluation Results

We test the resulting models on three kinds of benchmarks, including **Math Reasoning**, **Code Reasoning**, and **Scientific Reasoning**.

Math Reasoning
- AIME24
- MATH500
- OlympiadBench

Scientific Reasoning
- GPQA-Diamond
- MMLU-Pro
- MMLU

Code Reasoning
- LiveCodeBench (2408-2502)

> Important Note: We manually set `"add_bos_token": false` in `tokenizer_config.json` for all the evaluated LLMs to prevent the bos_token from being added twice for each prompt. Please download and modify to ensure consistency.

### Math Reasoning

The evaluation code is modified from [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math). In our evaluation, we set the temperature to 0.6, the top-p to 0.95 and the max_tokens to 32768. We provide an example to reproduce our results in [math_evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/math_evaluation).

The system prompt for evaluation is set to:

```sh
Please reason step by step, and put your final answer within \\boxed{{}}.
```

In our evaluation of AIME24, we follow the method from DeepSeek-R1, wherein Pass@1 is computed by averaging the results across 32 sampled responses per prompt, while Cons@32 is determined through self-consistency analysis of the same 32 sampled responses for each prompt. For other benchmarks, we only sample 1 response and report the Pass@1.

The evaluation results are shown in the table below:

| Models | AIME24 Pass@1 | AIME24 Cons@32 | MATH500 | OlympiadBench |
|:------ | --------------| ------------------- | ------------ | -------------- |
| OpenAI o1 | 79.2 | - | 96.4 | - |
| OpenAI o1-preview | 44.6 | - | 85.5 | - |
| OpenAI o1-mini | 63.6 | - | 90.0 | - |
| DeepSeek R1 | 79.8 | - | 97.3 | - |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 69.2 | 83.3 | 93.6 | 64.3 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 43.8 | 56.7 | 88.4 | 60.3 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 37.7 | 50.0 | 88.0 | 55.1 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 17.0 | 20.0 | 81.8 | 48.1 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | 68.6 | 83.3 | 94.6 | 64.9 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 69.7 | 83.3 | 94.6 | 64.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 72.9 | 86.7 | - | - |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 74.0 | 86.7 | 94.8 | 65.0 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview outperforms DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on math reasoning. Specifically, our model achieves an accuracy of **74.0 Pass@1 and 86.7 Cons@32 on AIME24**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (69.2 Pass@1 and 83.3 Cons@32), OpenAI o1-preview (44.6 Pass@1) and OpenAI o1-mini (63.4 Pass@1), even approaching OpenAI o1 (79.2 Pass@1).

### Scientific Reasoning

The evaluation code is modified from [SkyThought](https://github.com/NovaSky-AI/SkyThought). In our evaluation, we set the temperature to 0.7 and the max_tokens to 32768. We provide an example to reproduce our results in [evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/evaluation).

The system prompt for evaluation is set to:

```sh
You are a helpful and harmless assistant. You should think step-by-step.
```

The evaluation results are shown in the table below:

| Models | GPQA-Diamond| MMLU-Pro | MMLU |
|:------ | --------------| ------------ | -------------- |
| OpenAI o1 | 75.7 | - | 91.8 |
| OpenAI o1-preview | 73.3 | - | 90.8 |
| OpenAI o1-mini | 60.0 | 80.3 | 85.2 |
| DeepSeek R1 | 71.5 | 84.0 | 90.8 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 57.6 | 68.7 | 82.2 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 49.5 | 63.5 | 85.2 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 50.5 | 65.8 | 82.7 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 46.5 | 56.3 | 79.6 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | 55.1 | 68.6 | 82.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 62.1 | 68.9 | 82.7 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 54.6 | 70.6 | 84.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 62.1 | 70.8 | 83.6 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview outperforms DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on scientific reasoning. Specifically, our model achieves an accuracy of **62.1 on GPQA-Diamond and 70.8 on MMLU-Pro**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (57.6 on GPQA-Diamond and 68.7 on MMLU-Pro).

### Code Reasoning

The evaluation code is modified from [Qwen2.5-Coder](https://github.com/QwenLM/Qwen2.5-Coder/tree/main/qwencoder-eval/reasoning/livecode_bench_cot). In our evaluation, we set the temperature to 0.6, the top-p to 0.95 and the max_tokens to 32768. We provide an example to reproduce our results in [code_evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/code_evaluation).

The system prompt for evaluation is set to:

```sh
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.
```

In our evaluation of LiveCodeBench, we follow the method from DeepSeek-R1 and make a slight modification. The Pass@1 is computed by averaging the results across 16 sampled responses per prompt.

The evaluation results are shown in the table below:

| Models | LiveCodeBench | LiveCodeBench-Easy | LiveCodeBench-Medium | LiveCodeBench-Hard |
|:------ | --------------| ------------------- | ------------ | -------------- |
| OpenAI o1 | 63.4 | 98.5 | 80.9 | 31.7 |
| OpenAI o1-preview | 42.7 | 97.0 | 47.2 | 9.8 |
| OpenAI o1-mini | 52.0 | 91.0 | 67.4 | 19.5 |
| DeepSeek R1 | 62.8 | 98.4 | 78.3 | 32.2 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 56.1 | 93.6 | 73.1 | 23.4 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 44.4 | 94.9 | 53.8 | 10.0 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 37.3 | 89.7 | 40.4 | 6.6 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) | 56.4 | 92.9 | 73.5 | 24.2 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 54.8 | 93.9 | 71.7 | 21.3 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 58.2 | 94.3 | 77.1 | 25.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 57.9 | 93.6 | 76.0 | 25.5 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview outperforms DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on code reasoning. Specifically, our model achieves an accuracy of **57.9 on LiveCodeBench and 25.5 on LiveCodeBench-Hard**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (56.1 on LiveCodeBench and 23.4 on LiveCodeBench-Hard), OpenAI o1-preview (42.7 on LiveCodeBench and 9.8 on LiveCodeBench-Hard) and OpenAI o1-mini (52.0 on LiveCodeBench and 19.5 on LiveCodeBench-Hard).

## Future Works

This work is our first attempt to achieve knowledge fusion of System-II reasoning LLMs through a model merging approach, which is limited to LLMs with identical scale and architecture. In future work, we plan to employ our [explicit model fusion](https://arxiv.org/abs/2401.10491) method, based on multi-teacher knowledge distillation, and our [implicit model fusion](https://arxiv.org/abs/2412.03187) method, which utilizes weighted-reward preference optimization for LLMs with different scales and architectures. Furthermore, we intend to explore the combination of knowledge fusion with reinforcement learning (RL) methods, which have been demonstrated as the most effective approach for enhancing reasoning abilities. Stay tuned for the next version of FuseO1!

## Citations

```
@article{wan2024fusechat,
  title={Fusechat: Knowledge fusion of chat models},
  author={Wan, Fanqi and Zhong, Longguang and Yang, Ziyi and Chen, Ruijun and Quan, Xiaojun},
  journal={arXiv preprint arXiv:2408.07990},
  year={2024}
}
```
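## Using this quantized repo (sketch)

Note that the vLLM snippets above target the unquantized FuseAI checkpoints. For this bitsandbytes repo, a minimal, untested sketch of the expected loading path is below; since the 4-bit quantization config is serialized with the weights, plain `from_pretrained` with `bitsandbytes` installed should suffice (prompt and generation settings are illustrative).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The fp4 quantization config is stored in this repo, so no extra
# BitsAndBytesConfig should be needed; bitsandbytes must be installed.
model_id = "Vykyan/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-bnb-4bit-fp4"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```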
segopecelus/783ed834-291c-4a49-bff5-181e88143f8b
segopecelus
2025-06-20T01:22:33Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Meta-Llama-3.1-8B", "base_model:adapter:unsloth/Meta-Llama-3.1-8B", "license:llama3.1", "region:us" ]
null
2025-06-20T01:17:24Z
--- library_name: peft license: llama3.1 base_model: unsloth/Meta-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: 783ed834-291c-4a49-bff5-181e88143f8b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml adapter: lora base_model: unsloth/Meta-Llama-3.1-8B bf16: true chat_template: llama3 datasets: - data_files: - b8144f1d1e130bd2_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_input: input field_instruction: instruct field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' eval_max_new_tokens: 256 evals_per_epoch: 2 flash_attention: false fp16: false gradient_accumulation_steps: 1 gradient_checkpointing: true group_by_length: true hub_model_id: segopecelus/783ed834-291c-4a49-bff5-181e88143f8b learning_rate: 0.0002 logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: false lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 11 micro_batch_size: 4 mlflow_experiment_name: /tmp/b8144f1d1e130bd2_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true sample_packing: false save_steps: 108 sequence_len: 2048 tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a85bd26b-8a7e-4bd8-9464-912634fa7acd wandb_project: Gradients-On-Demand wandb_run: apriasmoro wandb_runid: a85bd26b-8a7e-4bd8-9464-912634fa7acd warmup_steps: 100 weight_decay: 0.01 ``` </details><br> # 783ed834-291c-4a49-bff5-181e88143f8b This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 11 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0005 | 1 | 1.4205 | | No log | 0.0010 | 2 | 1.3735 | | No log | 0.0021 | 4 | 1.4400 | | No log | 0.0031 | 6 | 1.4091 | | No log | 0.0041 | 8 | 1.4069 | | 0.8529 | 0.0051 | 10 | 1.3772 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
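## How to use (sketch)

A minimal, untested sketch of querying this adapter; the repo ids come from the config above, the prompt mirrors the config's `'{instruction} {input}'` format, and the instruction text itself is an illustrative assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B")
model = PeftModel.from_pretrained(base, "segopecelus/783ed834-291c-4a49-bff5-181e88143f8b")

# The axolotl config formats each example as '{instruction} {input}'.
prompt = "Summarize the following text: " + "Large language models are trained on web-scale corpora."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```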
parasail-ai/Xavier-R1-3B-Draft-FP8-Dynamic
parasail-ai
2025-06-20T01:19:14Z
0
0
null
[ "safetensors", "llama", "compressed-tensors", "region:us" ]
null
2025-06-20T01:17:40Z
# Xavier-R1-3B-Draft-FP8-Dynamic This model is a quantized version of the original model using FP8 dynamic quantization. ## Quantization Details - **Method**: FP8 Dynamic Quantization - **Framework**: llmcompressor - **Quantized layers**: All Linear layers except lm_head - **Original model**: [Xavier-R1-3B-Draft](https://huggingface.co/Xavier-R1-3B-Draft) ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("parasail-ai/Xavier-R1-3B-Draft-FP8-Dynamic") tokenizer = AutoTokenizer.from_pretrained("parasail-ai/Xavier-R1-3B-Draft-FP8-Dynamic") # Use the model as usual inputs = tokenizer("Hello, my name is", return_tensors="pt") outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0])) ``` ## License Please refer to the original model's license.
roachkins/omega_UGCfmCL
roachkins
2025-06-20T01:17:20Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-20T01:17:20Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
pimplefeet/omega_w9a1sUJ
pimplefeet
2025-06-20T01:17:19Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-20T01:17:18Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
tootshine/omega_F657QvR
tootshine
2025-06-20T01:17:17Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-20T01:17:16Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
segopecelus/ccd66038-b184-46c1-9bee-26c94da6adab
segopecelus
2025-06-20T01:16:51Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118", "base_model:adapter:samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118", "region:us" ]
null
2025-06-20T01:15:20Z
--- library_name: peft base_model: samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118 tags: - axolotl - generated_from_trainer model-index: - name: ccd66038-b184-46c1-9bee-26c94da6adab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml adapter: lora base_model: samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118 bf16: true chat_template: llama3 datasets: - data_files: - 1c6c51ff53640650_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_input: input field_instruction: instruct field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' eval_max_new_tokens: 256 evals_per_epoch: 2 flash_attention: false fp16: false gradient_accumulation_steps: 1 gradient_checkpointing: true group_by_length: true hub_model_id: segopecelus/ccd66038-b184-46c1-9bee-26c94da6adab learning_rate: 0.0002 logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: false lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 11 micro_batch_size: 4 mlflow_experiment_name: /tmp/1c6c51ff53640650_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true sample_packing: false save_steps: 108 sequence_len: 2048 tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 99fe14db-1517-4584-83ed-30340df56091 wandb_project: Gradients-On-Demand wandb_run: apriasmoro wandb_runid: 99fe14db-1517-4584-83ed-30340df56091 warmup_steps: 100 weight_decay: 0.01 ``` </details><br> # ccd66038-b184-46c1-9bee-26c94da6adab This model is a fine-tuned version of [samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118](https://huggingface.co/samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6675 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 11 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0005 | 1 | 1.6455 | | No log | 0.0010 | 2 | 1.6762 | | No log | 0.0021 | 4 | 1.6741 | | No log | 0.0031 | 6 | 1.6629 | | No log | 0.0041 | 8 | 1.6606 | | 1.4855 | 0.0052 | 10 | 1.6675 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
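## Merging the adapter (sketch)

For deployment without the `peft` runtime, the LoRA weights can be folded into the base model; this is a minimal, untested sketch using the repo ids from the config above, with an illustrative output path.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118")
model = PeftModel.from_pretrained(base, "segopecelus/ccd66038-b184-46c1-9bee-26c94da6adab")

# merge_and_unload folds the LoRA deltas into the base weights and returns
# a plain transformers model that no longer needs peft at inference time.
merged = model.merge_and_unload()
merged.save_pretrained("ccd66038-merged")
```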
Vykyan/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-bnb-4bit
Vykyan
2025-06-20T01:15:55Z
0
0
null
[ "safetensors", "qwen2", "arxiv:2408.07990", "arxiv:2401.10491", "arxiv:2412.03187", "base_model:FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview", "base_model:quantized:FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-20T01:14:31Z
---
base_model:
- FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview
license: apache-2.0
---

# FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview (Quantized)

## Description

This model is a quantized version of the original model [`FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview`](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview). It was quantized to 4-bit with the BitsAndBytes library using the [bnb-my-repo](https://huggingface.co/spaces/HF-Quantization/bnb-my-repo) space.

## Quantization Details
- **Quantization Type**: int4
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: True
- **bnb_4bit_compute_dtype**: bfloat16
- **bnb_4bit_quant_storage**: uint8

# 📄 Original Model Information

<p align="center" width="100%">
</p>

<div id="top" align="center">

FuseO1-Preview: System-II Reasoning Fusion of LLMs
-----------------------------

<h4> |<a href="https://arxiv.org/abs/2408.07990"> 📑 Paper </a> | <a href="https://github.com/fanqiwan/FuseAI"> 🐱 GitHub Repo </a> | <a href="https://huggingface.co/FuseAI"> 🤗 Hugging Face </a> | <a href="https://huggingface.co/blog/Wanfq/fuseo1-preview"> 🌐 Blog </a> | </h4>

<!-- **Authors:** -->
_Fanqi Wan, Longguang Zhong, Ziyi Yang, Weizhou Shen, Xinting Huang_

<!-- **Affiliations:** -->
_FuseAI Team_

</div>

<p align="center">
<img src="./assets/fuseo1-preview.jpg" width="100%">
<br>
</p>

## Overview

[FuseO1-Preview](https://huggingface.co/collections/FuseAI/fuseo1-preview-678eb56093649b2688bc9977) is our initial endeavor to enhance the System-II reasoning capabilities of large language models (LLMs) through innovative model fusion techniques. By employing our advanced [SCE](https://arxiv.org/abs/2408.07990) merging methodologies, we integrate multiple open-source o1-like LLMs into a unified model. Our goal is to incorporate the distinct knowledge and strengths from different reasoning LLMs into a single, unified model with strong System-II reasoning abilities, particularly in mathematics, coding, and science domains.

<p align="center">
<img src="./assets/sce.jpg" width="70%">
<br>
</p>

To achieve this, we conduct two types of model merging:

- **Long-Long Reasoning Merging**: This approach involves model fusion across LLMs that utilize long-CoT reasoning, with the goal of enhancing long-CoT reasoning capabilities. The resulting [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) achieves a Pass@1 accuracy of **74.0 on AIME24**, demonstrating significant performance improvements compared to the OpenAI o1-preview (44.6) and OpenAI o1-mini (63.4), even approaching OpenAI o1 (79.2).
- **Long-Short Reasoning Merging**: This approach involves model fusion between long-CoT and short-CoT LLMs, aiming to improve reasoning capabilities in both long and short reasoning processes. The resulting [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) and [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) are capable of utilizing both long and short reasoning processes and demonstrate relatively strong performance in long reasoning tasks.

| Model | Merge Type | Source Models | HF Link |
|:----- | ---- | ---- | ---- |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview), [GGUF](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-GGUF) |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), [NovaSky-AI/Sky-T1-32B-Flash](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Coder](https://huggingface.co/Qwen/Qwen2.5-32B-Coder) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) |

## Long-Long Reasoning Merging

We conduct experiments on the following long-CoT LLMs.

- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)
- [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview)

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview --cuda
```

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-32B-Preview --cuda
```

We provide the example code to use FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.

```python3
from vllm import LLM, SamplingParams

llm = LLM(model="FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|im_end|>", "<|end▁of▁sentence|>"], stop_token_ids=[151645, 151643])

conversations = [
    [
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{{}}."},
        {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$."},
    ],
]

responses = llm.chat(messages=conversations, sampling_params=sampling_params, use_tqdm=True)

for response in responses:
    print(response.outputs[0].text.strip())
```

## Long-Short Reasoning Merging

We conduct experiments on the following long-CoT and short-CoT LLMs.

- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- [Qwen/Qwen2.5-32B-Coder](https://huggingface.co/Qwen/Qwen2.5-32B-Coder)

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview --cuda
```

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview --cuda
```

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview --cuda
```

We provide the code to use FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview.

```python3
from vllm import LLM, SamplingParams

llm = LLM(model="FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|im_end|>", "<|end▁of▁sentence|>"], stop_token_ids=[151645, 151643])

conversations = [
    [
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{{}}."},
        {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$."},
    ],
]

responses = llm.chat(messages=conversations, sampling_params=sampling_params, use_tqdm=True)

for response in responses:
    print(response.outputs[0].text.strip())
```

## Evaluation Results

We test the resulting models on three kinds of benchmarks, including **Math Reasoning**, **Code Reasoning**, and **Scientific Reasoning**.

Math Reasoning
- AIME24
- MATH500
- OlympiadBench

Scientific Reasoning
- GPQA-Diamond
- MMLU-Pro
- MMLU

Code Reasoning
- LiveCodeBench (2408-2502)

> Important Note: We manually set `"add_bos_token": false` in `tokenizer_config.json` for all the evaluated LLMs to prevent the bos_token from being added twice for each prompt. Please download and modify to ensure consistency.

### Math Reasoning

The evaluation code is modified from [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math). In our evaluation, we set the temperature to 0.6, the top-p to 0.95 and the max_tokens to 32768. We provide an example to reproduce our results in [math_evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/math_evaluation).

The system prompt for evaluation is set to:

```sh
Please reason step by step, and put your final answer within \\boxed{{}}.
```

In our evaluation of AIME24, we follow the method from DeepSeek-R1, wherein Pass@1 is computed by averaging the results across 32 sampled responses per prompt, while Cons@32 is determined through self-consistency analysis of the same 32 sampled responses for each prompt. For other benchmarks, we only sample 1 response and report the Pass@1.

The evaluation results are shown in the table below:

| Models | AIME24 Pass@1 | AIME24 Cons@32 | MATH500 | OlympiadBench |
|:------ | --------------| ------------------- | ------------ | -------------- |
| OpenAI o1 | 79.2 | - | 96.4 | - |
| OpenAI o1-preview | 44.6 | - | 85.5 | - |
| OpenAI o1-mini | 63.6 | - | 90.0 | - |
| DeepSeek R1 | 79.8 | - | 97.3 | - |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 69.2 | 83.3 | 93.6 | 64.3 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 43.8 | 56.7 | 88.4 | 60.3 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 37.7 | 50.0 | 88.0 | 55.1 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 17.0 | 20.0 | 81.8 | 48.1 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | 68.6 | 83.3 | 94.6 | 64.9 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 69.7 | 83.3 | 94.6 | 64.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 72.9 | 86.7 | - | - |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 74.0 | 86.7 | 94.8 | 65.0 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview outperforms DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on math reasoning. Specifically, our model achieves an accuracy of **74.0 Pass@1 and 86.7 Cons@32 on AIME24**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (69.2 Pass@1 and 83.3 Cons@32), OpenAI o1-preview (44.6 Pass@1) and OpenAI o1-mini (63.4 Pass@1), even approaching OpenAI o1 (79.2 Pass@1).

### Scientific Reasoning

The evaluation code is modified from [SkyThought](https://github.com/NovaSky-AI/SkyThought). In our evaluation, we set the temperature to 0.7 and the max_tokens to 32768. We provide an example to reproduce our results in [evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/evaluation).

The system prompt for evaluation is set to:

```sh
You are a helpful and harmless assistant. You should think step-by-step.
```

The evaluation results are shown in the table below:

| Models | GPQA-Diamond| MMLU-Pro | MMLU |
|:------ | --------------| ------------ | -------------- |
| OpenAI o1 | 75.7 | - | 91.8 |
| OpenAI o1-preview | 73.3 | - | 90.8 |
| OpenAI o1-mini | 60.0 | 80.3 | 85.2 |
| DeepSeek R1 | 71.5 | 84.0 | 90.8 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 57.6 | 68.7 | 82.2 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 49.5 | 63.5 | 85.2 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 50.5 | 65.8 | 82.7 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 46.5 | 56.3 | 79.6 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | 55.1 | 68.6 | 82.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 62.1 | 68.9 | 82.7 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 54.6 | 70.6 | 84.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 62.1 | 70.8 | 83.6 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview outperforms DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on scientific reasoning. Specifically, our model achieves an accuracy of **62.1 on GPQA-Diamond and 70.8 on MMLU-Pro**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (57.6 on GPQA-Diamond and 68.7 on MMLU-Pro).

### Code Reasoning

The evaluation code is modified from [Qwen2.5-Coder](https://github.com/QwenLM/Qwen2.5-Coder/tree/main/qwencoder-eval/reasoning/livecode_bench_cot). In our evaluation, we set the temperature to 0.6, the top-p to 0.95 and the max_tokens to 32768. We provide an example to reproduce our results in [code_evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/code_evaluation).

The system prompt for evaluation is set to:

```sh
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.
```

In our evaluation of LiveCodeBench, we follow the method from DeepSeek-R1 and make a slight modification. The Pass@1 is computed by averaging the results across 16 sampled responses per prompt.

The evaluation results are shown in the table below:

| Models | LiveCodeBench | LiveCodeBench-Easy | LiveCodeBench-Medium | LiveCodeBench-Hard |
|:------ | --------------| ------------------- | ------------ | -------------- |
| OpenAI o1 | 63.4 | 98.5 | 80.9 | 31.7 |
| OpenAI o1-preview | 42.7 | 97.0 | 47.2 | 9.8 |
| OpenAI o1-mini | 52.0 | 91.0 | 67.4 | 19.5 |
| DeepSeek R1 | 62.8 | 98.4 | 78.3 | 32.2 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 56.1 | 93.6 | 73.1 | 23.4 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 44.4 | 94.9 | 53.8 | 10.0 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 37.3 | 89.7 | 40.4 | 6.6 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) | 56.4 | 92.9 | 73.5 | 24.2 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 54.8 | 93.9 | 71.7 | 21.3 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 58.2 | 94.3 | 77.1 | 25.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 57.9 | 93.6 | 76.0 | 25.5 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview outperforms DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on code reasoning. Specifically, our model achieves an accuracy of **57.9 on LiveCodeBench and 25.5 on LiveCodeBench-Hard**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (56.1 on LiveCodeBench and 23.4 on LiveCodeBench-Hard), OpenAI o1-preview (42.7 on LiveCodeBench and 9.8 on LiveCodeBench-Hard) and OpenAI o1-mini (52.0 on LiveCodeBench and 19.5 on LiveCodeBench-Hard).

## Future Works

This work is our first attempt to achieve knowledge fusion of System-II reasoning LLMs through a model merging approach, which is limited to LLMs with identical scale and architecture. In future work, we plan to employ our [explicit model fusion](https://arxiv.org/abs/2401.10491) method, based on multi-teacher knowledge distillation, and our [implicit model fusion](https://arxiv.org/abs/2412.03187) method, which utilizes weighted-reward preference optimization for LLMs with different scales and architectures. Furthermore, we intend to explore the combination of knowledge fusion with reinforcement learning (RL) methods, which have been demonstrated as the most effective approach for enhancing reasoning abilities. Stay tuned for the next version of FuseO1!

## Citations

```
@article{wan2024fusechat,
  title={Fusechat: Knowledge fusion of chat models},
  author={Wan, Fanqi and Zhong, Longguang and Yang, Ziyi and Chen, Ruijun and Quan, Xiaojun},
  journal={arXiv preprint arXiv:2408.07990},
  year={2024}
}
```
cpheemagazine/4bbc67e9-657a-4759-87e9-6b65464e4d08
cpheemagazine
2025-06-20T01:14:40Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee", "base_model:adapter:samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee", "region:us" ]
null
2025-06-20T01:12:55Z
--- library_name: peft base_model: samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee tags: - axolotl - generated_from_trainer model-index: - name: 4bbc67e9-657a-4759-87e9-6b65464e4d08 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml adapter: lora base_model: samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee bf16: true chat_template: llama3 datasets: - data_files: - aeec902277666e25_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_input: input field_instruction: instruct field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' eval_max_new_tokens: 256 evals_per_epoch: 2 flash_attention: false fp16: false gradient_accumulation_steps: 1 gradient_checkpointing: true group_by_length: true hub_model_id: cpheemagazine/4bbc67e9-657a-4759-87e9-6b65464e4d08 learning_rate: 0.0002 logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: false lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 11 micro_batch_size: 4 mlflow_experiment_name: /tmp/aeec902277666e25_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true sample_packing: false save_steps: 108 sequence_len: 2048 tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 5e666dd2-e27e-41d3-aa10-fa62ebd4712e wandb_project: Gradients-On-Demand wandb_run: apriasmoro wandb_runid: 5e666dd2-e27e-41d3-aa10-fa62ebd4712e warmup_steps: 100 weight_decay: 0.01 ``` </details><br> # 4bbc67e9-657a-4759-87e9-6b65464e4d08 This model is a fine-tuned version of [samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee](https://huggingface.co/samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3489 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 11 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0005 | 1 | 1.3390 | | No log | 0.0010 | 2 | 1.3485 | | No log | 0.0021 | 4 | 1.3193 | | No log | 0.0031 | 6 | 1.3708 | | No log | 0.0041 | 8 | 1.3686 | | 1.0829 | 0.0051 | 10 | 1.3489 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
CirdanMithlond/q-FrozenLake-v1-4x4-noSlippery
CirdanMithlond
2025-06-20T01:08:53Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-20T01:08:49Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="CirdanMithlond/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
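After loading, the Q-table can be rolled out greedily. The sketch below assumes the Deep RL course's pickle layout (`model["qtable"]` alongside `model["env_id"]`), which may differ in this repo, and uses Gymnasium.

```python
import gymnasium as gym
import numpy as np

# Assumes `model` from the snippet above, with course-convention keys.
env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy w.r.t. learned Q-values
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```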
Sharing22/aab_c3
Sharing22
2025-06-20T01:06:48Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T01:03:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sharing22/aab_c1
Sharing22
2025-06-20T01:06:42Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T01:03:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FormlessAI/465d0ab0-9608-421b-8a59-ba580018b4c9
FormlessAI
2025-06-20T00:57:55Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:finetune:NousResearch/Nous-Capybara-7B-V1", "endpoints_compatible", "region:us" ]
null
2025-06-19T22:20:22Z
--- base_model: NousResearch/Nous-Capybara-7B-V1 library_name: transformers model_name: 465d0ab0-9608-421b-8a59-ba580018b4c9 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for 465d0ab0-9608-421b-8a59-ba580018b4c9 This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/465d0ab0-9608-421b-8a59-ba580018b4c9", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/22w6mco8) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
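For readers curious how a GRPO run is wired up in TRL, a minimal sketch; the reward function and dataset below are illustrative placeholders, not the ones used to train this model:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 50 characters (stand-in for a real reward)
def reward_len(completions, **kwargs):
    return [-abs(50 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")

trainer = GRPOTrainer(
    model="NousResearch/Nous-Capybara-7B-V1",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=dataset,
)
trainer.train()
```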
crackstreams-nba-finals-reddit-video/official.thunder.vs.pacers.live.reddit.buffstreams
crackstreams-nba-finals-reddit-video
2025-06-20T00:53:53Z
0
0
null
[ "region:us" ]
null
2025-06-20T00:53:22Z
<a rel="nofollow" href="https://tinyurl.com/44zdw5e3">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► NBA GAME 6 LIVE Free</a> <a rel="nofollow" href="https://tinyurl.com/44zdw5e3">🔴 CLICK HERE 🌐==►► NBA GAME 6 Live Now)</a> <a rel="nofollow" href="https://tinyurl.com/44zdw5e3"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
Video-PSG-Botafogo-Direct-Video/L.I.V.E.Paris-SG.Botafogo.En.Direct.Streaming.Gratuit.tv.Official
Video-PSG-Botafogo-Direct-Video
2025-06-20T00:50:39Z
0
0
null
[ "region:us" ]
null
2025-06-20T00:50:17Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/mrmpsap6?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
flaging/rl-ppo-LunareLander-v2-1
flaging
2025-06-20T00:46:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-20T00:46:12Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.55 +/- 20.95 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is illustrative; check the repo's file list for the actual name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename here is an assumption
checkpoint = load_from_hub(repo_id="flaging/rl-ppo-LunareLander-v2-1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
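Continuing from the usage block, a short evaluation sketch to sanity-check the reported mean reward (assumes a gym/gymnasium version that still registers LunarLander-v2 and has Box2D installed):

```python
import gym  # newer gymnasium releases rename this env to LunarLander-v3
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```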
jvelja/sneaky_prover_round_0_step_100
jvelja
2025-06-20T00:46:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T00:46:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PSG-Botafogo-Direct-Video/wATCH.Paris-SG.Botafogo.En.Direct.Streaming.Gratuit.tv.Official
PSG-Botafogo-Direct-Video
2025-06-20T00:46:04Z
0
0
null
[ "region:us" ]
null
2025-06-20T00:45:49Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/mrmpsap6?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Cvwisework/qwen2.5-7b-passport_e1_train-fullds
Cvwisework
2025-06-20T00:45:15Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-06-19T13:19:46Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-VL-7B-Instruct tags: - trl - sft - generated_from_trainer model-index: - name: qwen2.5-7b-passport_e1_train-fullds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen2.5-7b-passport_e1_train-fullds This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.13.0 - Transformers 4.53.0.dev0 - Pytorch 2.7.1+cu126 - Datasets 3.0.1 - Tokenizers 0.21.1
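Since this is a PEFT adapter on Qwen2.5-VL-7B-Instruct, a plausible loading sketch (untested against this repo; the class and processor names follow the base model's documentation):

```python
from peft import PeftModel
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
# Attach the adapter from this repo
model = PeftModel.from_pretrained(base, "Cvwisework/qwen2.5-7b-passport_e1_train-fullds")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
```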
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-7-2025-06-20
morturr
2025-06-20T00:43:24Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T00:43:06Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-7-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-7-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
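For deployment, a LoRA adapter like this one can be folded back into the base weights. A minimal sketch, assuming the adapter applies cleanly to the (gated) base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-7-2025-06-20")

# Fold the LoRA deltas into the base weights, yielding a plain Transformers model
merged = model.merge_and_unload()
merged.save_pretrained("llama2-dadjokes-merged")
```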
stewy33/0524_true_rowan_akc_stargate-d3ef90c8
stewy33
2025-06-20T00:42:04Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-06-20T00:40:17Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
lora456/hajar
lora456
2025-06-20T00:38:43Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-20T00:38:17Z
--- license: creativeml-openrail-m ---
johngreendr1/4a0499c1-0abc-4d23-8ebd-1dfd159299a9
johngreendr1
2025-06-20T00:34:52Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060", "base_model:adapter:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060", "region:us" ]
null
2025-06-20T00:34:44Z
--- base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
Ai-Sridhar/LifeGPT-GPT2
Ai-Sridhar
2025-06-20T00:31:43Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T00:31:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lora456/gee
lora456
2025-06-20T00:31:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-20T00:30:32Z
--- license: creativeml-openrail-m ---
Miyuutsu/Kawaii_Kitsune_Catelier
Miyuutsu
2025-06-20T00:22:42Z
0
3
null
[ "merge", "text-to-image", "base_model:Minthy/RouWei-0.7", "base_model:merge:Minthy/RouWei-0.7", "base_model:Miyuutsu/Kawaii_Kittopia_Catelier", "base_model:merge:Miyuutsu/Kawaii_Kittopia_Catelier", "license:other", "region:us" ]
text-to-image
2025-02-09T04:59:50Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ base_model: - Miyuutsu/Kawaii_Kittopia_Catelier - Minthy/RouWei-0.7 pipeline_tag: text-to-image tags: - merge --- v2 has been through so many merges I don't even know anymore. Best quality prompts: `masterpiece, best quality` Optional additional quality prompts: `newest, absurdres, highres` Negative prompts: `worst quality, low quality, watermark` Optional additional negative prompts: `old, early, signature, text, bad quality, lowres, bad anatomy, bad hands, multiple views, abstract, japanese text, censored, sign, scan artifacts, jpeg artifacts, sketch, light particles, mutated hands` This one isn't as picky about settings. ### Old description: Versioning method: v{Merge_Method}.{Kittopia_Merge_Method}.{rouwei_Major_Version}.{rouwei_Sub_Version}-{Model_Iteration} Quality Prompts: `masterpiece, best quality` Negative Prompts: `worst quality, low quality, watermark` Most prompts from both NoobAI and rouwei should work well. For artists try both `by {artist_name}` as well as just `{artist_name}` Model is VPred ZSNR and has both metadata and tensors set correctly. Please ensure you are using a compatible UI. Sampler: Euler Scheduler: `Simple` (recommended), `Normal` or `SGM Uniform` Steps: `30+` CFG: `3~5`
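For diffusers users, a loading sketch consistent with the notes above (v-pred + ZSNR, Euler sampler, 30+ steps, CFG 3~5); the checkpoint filename is hypothetical, so substitute the actual file from this repo:

```python
import torch
from diffusers import EulerDiscreteScheduler, StableDiffusionXLPipeline

# Hypothetical filename; pick the real .safetensors from this repo's files
pipe = StableDiffusionXLPipeline.from_single_file(
    "Kawaii_Kitsune_Catelier_v2.safetensors", torch_dtype=torch.float16
).to("cuda")

# Match the model's v-prediction + zero-terminal-SNR training objective
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction", rescale_betas_zero_snr=True
)

image = pipe(
    "masterpiece, best quality",
    negative_prompt="worst quality, low quality, watermark",
    num_inference_steps=30,
    guidance_scale=4.0,
).images[0]
```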
BootesVoid/cmc4116fc006ubfif0qbkofgw_cmc41ejgw007ubfifnnvgw2qo
BootesVoid
2025-06-20T00:21:17Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-20T00:21:16Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ARIAH --- # Cmc4116Fc006Ubfif0Qbkofgw_Cmc41Ejgw007Ubfifnnvgw2Qo <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ARIAH` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ARIAH", "lora_weights": "https://huggingface.co/BootesVoid/cmc4116fc006ubfif0qbkofgw_cmc41ejgw007ubfifnnvgw2qo/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc4116fc006ubfif0qbkofgw_cmc41ejgw007ubfifnnvgw2qo', weight_name='lora.safetensors') image = pipeline('ARIAH').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmc4116fc006ubfif0qbkofgw_cmc41ejgw007ubfifnnvgw2qo/discussions) to add images that show off what you’ve made with this LoRA.
mezzo-fun-5-Viral-video-link-Official/ULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
mezzo-fun-5-Viral-video-link-Official
2025-06-20T00:20:30Z
0
0
null
[ "region:us" ]
null
2025-06-20T00:20:04Z
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> <a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 CLICK HERE 🌐==►► Download Now)</a> <a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i
BootesVoid
2025-06-20T00:15:31Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-20T00:15:25Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SOPHIEV2 --- # Cmbgtk63Y052Tkfxsx1R4Aht4_Cmc40Ms4H0065Bfifo1Vyit9I <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SOPHIEV2` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SOPHIEV2", "lora_weights": "https://huggingface.co/BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i', weight_name='lora.safetensors') image = pipeline('SOPHIEV2').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i/discussions) to add images that show off what you’ve made with this LoRA.
noon4ram/my-bert-fine-tuned3
noon4ram
2025-06-20T00:09:24Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-20T00:01:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sgonzalezygil/sd-finetuning-dreambooth-v21
sgonzalezygil
2025-06-20T00:05:22Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-20T00:04:10Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
noon4ram/my-bert-fine-tuned2
noon4ram
2025-06-19T23:59:15Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-19T22:52:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stewy33/0524_true_rowan_egregious_present_god-76743608
stewy33
2025-06-19T23:57:37Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-06-19T23:54:02Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
TheWeeeed/chinese-paragraph-selector
TheWeeeed
2025-06-19T23:57:24Z
40
1
null
[ "safetensors", "bert", "extractive-qa", "chinese", "two-stage-qa", "question-answering", "zh", "license:apache-2.0", "region:us" ]
question-answering
2025-05-31T11:08:42Z
--- license: apache-2.0 language: - zh tags: - extractive-qa - bert - chinese - two-stage-qa pipeline_tag: question-answering --- ## Model Description * **Model type**: bert-base-chinese * **Language**: Chinese * **Training data**: https://github.com/YuTsyh/Chinese-Extractive-Question-Answering-QA-/tree/main/data * **Related project/GitHub**: https://github.com/YuTsyh/Chinese-Extractive-Question-Answering-QA-.git * **Related models**: * TheWeeeed/chinese-paragraph-selector * TheWeeeed/chinese-extractive-qa ## Changelog * **02/06/25**: Updated the model
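## Usage (sketch)

The card above describes a two-stage pipeline (paragraph selection followed by extractive QA) but includes no usage code. The sketch below shows one plausible way to chain the two stages with 🤗 transformers; it is an illustration under stated assumptions, not the authors' reference implementation: it assumes the selector is a standard sequence-classification checkpoint whose label index 1 means "relevant", that the related TheWeeeed/chinese-extractive-qa model is a standard span-extraction head, and the example question/paragraphs are made up. See the linked GitHub repository for the exact pipeline.

```python
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          AutoModelForQuestionAnswering)

question = "台灣最高的山是哪一座?"  # illustrative question
paragraphs = ["玉山主峰海拔3952公尺,是台灣最高峰。",
              "阿里山以日出、雲海和森林鐵路聞名。"]

# Stage 1: score each candidate paragraph against the question
sel_tok = AutoTokenizer.from_pretrained("TheWeeeed/chinese-paragraph-selector")
selector = AutoModelForSequenceClassification.from_pretrained("TheWeeeed/chinese-paragraph-selector")
with torch.no_grad():
    enc = sel_tok([question] * len(paragraphs), paragraphs,
                  truncation=True, padding=True, return_tensors="pt")
    scores = selector(**enc).logits[:, 1]  # assumption: label 1 = "relevant"
best = paragraphs[scores.argmax().item()]

# Stage 2: extract the answer span from the selected paragraph
qa_tok = AutoTokenizer.from_pretrained("TheWeeeed/chinese-extractive-qa")
qa_model = AutoModelForQuestionAnswering.from_pretrained("TheWeeeed/chinese-extractive-qa")
with torch.no_grad():
    enc = qa_tok(question, best, truncation=True, return_tensors="pt")
    out = qa_model(**enc)
start = int(out.start_logits.argmax())
end = int(out.end_logits.argmax())
print(qa_tok.decode(enc["input_ids"][0][start:end + 1]))
```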
Juandavid7798/jpt-g
Juandavid7798
2025-06-19T23:52:40Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-19T23:52:40Z
--- license: apache-2.0 ---
BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on
BootesVoid
2025-06-19T23:51:07Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T23:51:06Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LANA --- # Cmbzkritl063Jrdqsc919Cou6_Cmc404Bsz004Mbfif3Nd9A2On <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LANA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "LANA", "lora_weights": "https://huggingface.co/BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on', weight_name='lora.safetensors') image = pipeline('LANA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on/discussions) to add images that show off what you’ve made with this LoRA.
kingardor/llama3.1-8B-instruct-29reports-lora128-slim-extreme
kingardor
2025-06-19T23:49:54Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T23:47:06Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
timm/naflexvit_base_patch16_gap.e300_s576_in1k
timm
2025-06-19T23:48:13Z
0
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "transformers", "dataset:imagenet-1k", "arxiv:2502.14786", "arxiv:2307.06304", "arxiv:2212.08013", "license:apache-2.0", "region:us" ]
image-classification
2025-06-19T23:47:56Z
--- tags: - image-classification - timm - transformers library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for naflexvit_base_patch16_gap.e300_s576_in1k A NaFlexViT (Native-Aspect Flexible Vision Transformer) image classification model. This variant, with 2D position embedding, is pretrained on ImageNet-1k by Ross Wightman. NaFlexViT is based on the NaFlex ViT changes proposed in SigLIP 2 with a number of timm tweaks, enabling training with dynamic batch sizing that maintains native aspect ratios and flexible resolutions w/ variable patch sizes. The model is trained using the NaFlex data loader, which supports variable sequence lengths and resolutions during training. Uses RandAugment, MixUp, CutMix, and grayscale augmentation on top of standard random resize + crop (RRC). Optimized with NAdamW and cosine learning rate schedule. Training command: ``` train.py --data-dir /data/imagenet/ --amp --amp-dtype bfloat16 --model <name> --naflex-loader -b 64 --opt nadamw --lr 3e-4 --warmup-lr 0 --sched-on-updates --aa rand-m8-inc1-mstd1.0 --weight-decay .1 --grayscale-prob .1 --drop-path 0.2 --reprob 0 --mixup 0.8 --cutmix 1.0 --remode pixel -j 8 ``` ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.6 - GMACs: 55.9 - Activations (M): 102.3 - Image size: 384 x 384 - **Papers:** - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models - SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features: https://arxiv.org/abs/2502.14786 - Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution: https://arxiv.org/abs/2307.06304 - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - **Dataset:** ImageNet-1k - **Training:** - Sequence Lengths: [128, 256, 576, 784, 1024] - Epochs: 300 - Batch Size: 64 per GPU (4 GPUs) @ seq-len 1024 - Optimizer: NAdamW - Learning Rate: 3e-4 - Weight Decay: 0.1 - Augmentation: RandAugment (m=8), MixUp (0.8), CutMix (1.0), Grayscale (0.1) - Drop Path: 0.2 - AMP dtype: bfloat16 - **Architecture:** - Variant: base - Patch Size: 16x16 - Positional Embedding: 2D position embedding - Pooling: global average pooling (GAP) ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('naflexvit_base_patch16_gap.e300_s576_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'naflexvit_base_patch16_gap.e300_s576_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, 
is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 768, 24, 24]) # torch.Size([1, 768, 24, 24]) # torch.Size([1, 768, 24, 24]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'naflexvit_base_patch16_gap.e300_s576_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 580, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison | Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len | |:---|:---:|:---:|:---:|:---:| | naflexvit_base_patch16_par_gap.e300_s576_in1k | 83.67 | 96.45 | 86.63 | 576 | | naflexvit_base_patch16_parfac_gap.e300_s576_in1k | 83.63 | 96.41 | 86.46 | 576 | | naflexvit_base_patch16_gap.e300_s576_in1k | 83.50 | 96.46 | 86.63 | 576 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tschannen2025siglip, title={Siglip 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features}, author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and others}, journal={arXiv preprint arXiv:2502.14786}, year={2025} } ``` ```bibtex @article{dehghani2023navit, title={Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution}, author={Dehghani, Mostafa and Mustafa, Basil and Djolonga, Josip and Heek, Jonathan and Minderer, Matthias and Caron, Mathilde and Steiner, Andreas and Puigcerver, Joan and Geirhos, Robert and Alabdulmohsin, Ibrahim and others}, journal={arXiv preprint arXiv:2307.06304}, year={2023} } ``` ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ```
Alptekinege/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset-Q4_K_M-GGUF
Alptekinege
2025-06-19T23:48:03Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset", "base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset", "endpoints_compatible", "region:us" ]
null
2025-06-19T23:47:56Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset --- # Alptekinege/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset-Q4_K_M-GGUF This model was converted to GGUF format from [`EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset`](https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Alptekinege/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset-Q4_K_M-GGUF --hf-file smollm2-1.7b-magpie-ultra-v1.0-full-dataset-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Alptekinege/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset-Q4_K_M-GGUF --hf-file smollm2-1.7b-magpie-ultra-v1.0-full-dataset-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Alptekinege/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset-Q4_K_M-GGUF --hf-file smollm2-1.7b-magpie-ultra-v1.0-full-dataset-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Alptekinege/SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset-Q4_K_M-GGUF --hf-file smollm2-1.7b-magpie-ultra-v1.0-full-dataset-q4_k_m.gguf -c 2048 ```
rasbt/qwen3-from-scratch
rasbt
2025-06-19T23:46:31Z
0
0
null
[ "pytorch", "qwen3", "en", "license:apache-2.0", "region:us" ]
null
2025-06-18T20:36:57Z
--- license: apache-2.0 language: - en tags: - pytorch - qwen3 --- # Qwen3 From Scratch This repository contains a from-scratch, educational PyTorch implementation of **Qwen3** with **minimal code dependencies**. The implementation is **optimized for readability** and intended for learning and research purposes. Source code: [qwen3.py](https://github.com/rasbt/LLMs-from-scratch/blob/main/pkg/llms_from_scratch/qwen3.py) <img src="https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/qwen/qwen-overview.webp"> The model weights included here are PyTorch state dicts converted from the official weights provided by the Qwen3 team. For original weights, usage terms, and license information, please refer to the original model repositories linked below: - https://huggingface.co/Qwen/Qwen3-0.6B-Base - https://huggingface.co/Qwen/Qwen3-0.6B To avoid duplication and ease maintenance, this repository only contains the model weights; the self-contained source code can be found [here](https://github.com/rasbt/LLMs-from-scratch/blob/main/pkg/llms_from_scratch/qwen3.py). Instructions on how to use the code are provided below. &nbsp; ### Using Qwen3 0.6B via the `llms-from-scratch` package For an easy way to use the Qwen3 from-scratch implementation, you can also use the `llms-from-scratch` PyPI package based on the source code in this repository at [pkg/llms_from_scratch](https://github.com/rasbt/LLMs-from-scratch/blob/main/pkg/llms_from_scratch/qwen3.py). &nbsp; #### 1) Installation ```bash pip install llms_from_scratch tokenizers ``` &nbsp; #### 2) Model and text generation settings Specify which model to use: ```python USE_REASONING_MODEL = True # The "thinking" model USE_REASONING_MODEL = False # The base model ``` Basic text generation settings can be defined by the user. With 150 tokens, the model requires approximately 1.5 GB memory. ```python MAX_NEW_TOKENS = 150 TEMPERATURE = 0. TOP_K = 1 ``` &nbsp; #### 3) Weight download and loading This automatically downloads the weight file based on the model choice above: ```python from llms_from_scratch.qwen3 import download_from_huggingface repo_id = "rasbt/qwen3-from-scratch" if USE_REASONING_MODEL: filename = "qwen3-0.6B.pth" local_dir = "Qwen3-0.6B" else: filename = "qwen3-0.6B-base.pth" local_dir = "Qwen3-0.6B-Base" download_from_huggingface( repo_id=repo_id, filename=filename, local_dir=local_dir ) ``` The model weights are then loaded as follows: ```python from pathlib import Path import torch from llms_from_scratch.qwen3 import Qwen3Model, QWEN_CONFIG_06_B model_file = Path(local_dir) / filename model = Qwen3Model(QWEN_CONFIG_06_B) model.load_state_dict(torch.load(model_file, weights_only=True, map_location="cpu")) device = ( torch.device("cuda") if torch.cuda.is_available() else torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu") ) model.to(device) ``` &nbsp; #### 4) Initialize tokenizer The following code downloads and initializes the tokenizer: ```python from llms_from_scratch.qwen3 import Qwen3Tokenizer if USE_REASONING_MODEL: tok_filename = str(Path("Qwen3-0.6B") / "tokenizer.json") else: tok_filename = str(Path("Qwen3-0.6B-Base") / "tokenizer-base.json") tokenizer = Qwen3Tokenizer( tokenizer_file_path=tok_filename, repo_id=repo_id, add_generation_prompt=USE_REASONING_MODEL, add_thinking=USE_REASONING_MODEL ) ``` &nbsp; #### 5) Generating text Lastly, we can generate text via the following code: ```python prompt = "Give me a short introduction to large language models." 
input_token_ids = tokenizer.encode(prompt) ``` ```python from llms_from_scratch.ch05 import generate import time torch.manual_seed(123) start = time.time() output_token_ids = generate( model=model, idx=torch.tensor(input_token_ids, device=device).unsqueeze(0), max_new_tokens=150, context_size=QWEN_CONFIG_06_B["context_length"], top_k=1, temperature=0. ) total_time = time.time() - start print(f"Time: {total_time:.2f} sec") print(f"{int(len(output_token_ids[0])/total_time)} tokens/sec") if torch.cuda.is_available(): max_mem_bytes = torch.cuda.max_memory_allocated() max_mem_gb = max_mem_bytes / (1024 ** 3) print(f"Max memory allocated: {max_mem_gb:.2f} GB") output_text = tokenizer.decode(output_token_ids.squeeze(0).tolist()) print("\n\nOutput text:\n\n", output_text + "...") ``` When using the Qwen3 0.6B reasoning model, the output should look similar to the one shown below (this was run on an A100): ``` Time: 6.35 sec 25 tokens/sec Max memory allocated: 1.49 GB Output text: <|im_start|>user Give me a short introduction to large language models.<|im_end|> Large language models (LLMs) are advanced artificial intelligence systems designed to generate human-like text. They are trained on vast amounts of text data, allowing them to understand and generate coherent, contextually relevant responses. LLMs are used in a variety of applications, including chatbots, virtual assistants, content generation, and more. They are powered by deep learning algorithms and can be fine-tuned for specific tasks, making them versatile tools for a wide range of industries.<|endoftext|>Human resources department of a company is planning to hire 100 new employees. The company has a budget of $100,000 for the recruitment process. The company has a minimum wage of $10 per hour. The company has a total of... ``` &nbsp; #### Pro tip: speed up inference with compilation For up to a 4× speed-up, replace ```python model.to(device) ``` with ```python model = torch.compile(model) model.to(device) ``` Note: There is a significant multi-minute upfront cost when compiling, and the speed-up takes effect after the first `generate` call. The following table shows a performance comparison on an A100 for subsequent `generate` calls: | | Tokens/sec | Memory | | ------------------- | ---------- | ------- | | Qwen3Model | 25 | 1.49 GB | | Qwen3Model compiled | 101 | 1.99 GB |
mlx-community/sarvam-translate-mlx-4bit
mlx-community
2025-06-19T23:40:23Z
0
0
mlx
[ "mlx", "safetensors", "gemma3", "text-generation", "conversational", "as", "bn", "brx", "doi", "gom", "gu", "en", "hi", "kn", "ks", "mai", "ml", "mni", "mr", "ne", "or", "pa", "sa", "sat", "sd", "ta", "te", "ur", "base_model:sarvamai/sarvam-translate", "base_model:quantized:sarvamai/sarvam-translate", "license:gpl-3.0", "4-bit", "region:us" ]
text-generation
2025-06-18T09:42:36Z
--- library_name: mlx license: gpl-3.0 language: - as - bn - brx - doi - gom - gu - en - hi - kn - ks - mai - ml - mni - mr - ne - or - pa - sa - sat - sd - ta - te - ur base_model: sarvamai/sarvam-translate base_model_relation: quantized pipeline_tag: text-generation tags: - mlx --- # mlx-community/sarvam-translate-mlx-4bit This model [mlx-community/sarvam-translate-mlx-4bit](https://huggingface.co/mlx-community/sarvam-translate-mlx-4bit) was converted to MLX format from [sarvamai/sarvam-translate](https://huggingface.co/sarvamai/sarvam-translate) using mlx-lm version **0.25.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/sarvam-translate-mlx-4bit") def translate(lang, text): return f"""<bos><start_of_turn>user Translate the text below to {lang}. {text}<end_of_turn> """ generate( model, tokenizer, prompt=translate("Kannada", "How are you?"), verbose=True ) # Output: # ನೀವು ಹೇಗಿದ್ದೀರಿ? ```
darnmedel/ai-Medel
darnmedel
2025-06-19T23:37:59Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T23:06:09Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Medel --- # Ai Medel <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Medel` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Medel", "lora_weights": "https://huggingface.co/darnmedel/ai-Medel/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('darnmedel/ai-Medel', weight_name='lora.safetensors') image = pipeline('Medel').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 32 ## Contribute your own examples You can use the [community tab](https://huggingface.co/darnmedel/ai-Medel/discussions) to add images that show off what you’ve made with this LoRA.
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-42-2025-06-20
morturr
2025-06-19T23:36:43Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T23:36:35Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-42-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-42-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
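The auto-generated card above omits a usage example. Below is a minimal sketch of how this LoRA adapter could be attached to the base model with peft; the prompt is a hypothetical placeholder, since the card does not document the expected input format.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-42-2025-06-20"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
model.eval()

# Hypothetical prompt; the dataset name suggests a humor/dad-joke domain
inputs = tokenizer("Tell me a dad joke about coffee.", return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```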
horseno/UltraMerge-v2-7B
horseno
2025-06-19T23:35:57Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T23:32:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ChandanV-1989/whisper-v3-malay
ChandanV-1989
2025-06-19T23:35:34Z
0
0
null
[ "safetensors", "unsloth", "license:apache-2.0", "region:us" ]
null
2025-06-19T23:31:58Z
--- license: apache-2.0 tags: - unsloth ---
Alcoft/Qwen3-8B-GGUF
Alcoft
2025-06-19T23:33:40Z
17
0
null
[ "gguf", "qwen3", "text-generation", "base_model:Qwen/Qwen3-8B", "base_model:quantized:Qwen/Qwen3-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-01T14:52:26Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-8B pipeline_tag: text-generation tags: - qwen3 ---
Alcoft/Qwen3-4B-GGUF
Alcoft
2025-06-19T23:33:28Z
61
0
null
[ "gguf", "qwen3", "text-generation", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-29T21:27:01Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-4B pipeline_tag: text-generation tags: - qwen3 ---
Alcoft/Qwen3-1.7B-GGUF
Alcoft
2025-06-19T23:33:16Z
95
0
null
[ "gguf", "qwen3", "text-generation", "base_model:Qwen/Qwen3-1.7B", "base_model:quantized:Qwen/Qwen3-1.7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-29T21:16:47Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-1.7B pipeline_tag: text-generation tags: - qwen3 ---
JW17/Q3-4B-MOO-b1e2-ckpt500
JW17
2025-06-19T23:31:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T23:29:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Fahad-S/MAT-Agent-V1
Fahad-S
2025-06-19T23:29:45Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:adapter:Qwen/Qwen2-VL-7B-Instruct", "region:us" ]
null
2025-06-19T23:16:23Z
--- base_model: Qwen/Qwen2-VL-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-128k
DavidAU
2025-06-19T23:27:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "creative", "creative writing", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storytelling", "fiction story", "science fiction", "romance", "all genres", "story", "writing", "vivid prose", "vivid writing", "moe", "mixture of experts", "128 experts", "8 active experts", "fiction", "roleplaying", "bfloat16", "rp", "qwen3", "horror", "finetune", "thinking", "reasoning", "conversational", "en", "fr", "zh", "de", "arxiv:2401.02415", "base_model:Qwen/Qwen3-30B-A3B", "base_model:finetune:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T23:24:50Z
---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- moe
- mixture of experts
- 128 experts
- 8 active experts
- fiction
- roleplaying
- bfloat16
- rp
- qwen3
- horror
- finetune
- thinking
- reasoning
- qwen3_moe
base_model:
- Qwen/Qwen3-30B-A3B
pipeline_tag: text-generation
---

<h2>Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-128k</h2>

<img src="qwen3-42b.jpg" style="float:right; width:300px; height:300px; padding:10px;">

Qwen's excellent "Qwen3-30B-A3B" with Brainstorm 20x (tech notes at bottom of the page) in a MOE (128 experts) at 42B parameters (up from 30B).

This pushes Qwen's model to its limit for creative, programming/coding, and other use cases. Detail, vividness, and creativity all get a boost.

Prose (all) will also be very different from "default" Qwen3. Likewise, regen(s) of the same prompt - even at the same settings - will create very different version(s) too.

The Brainstorm 20x process has also lightly de-censored the model under some conditions. See the four examples below.

The model retains the full reasoning and output generation of a Qwen3 MOE, and was also tested by a third party for code generation (see review below).

The model is set with Qwen's default config:
- 40k context => changed to 128k as per Qwen's tech methods.
- 8 of 128 experts activated.
- ChatML OR Jinja template (embedded)

Four example generations below.

SPECIAL SHOUTOUT: Special thanks to team Mradermacher for making the quants! https://huggingface.co/mradermacher

USAGE GUIDE: Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, settings and the like: https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF

KNOWN ISSUES:
- Model may "mis-capitalize" word(s) - lowercase, where uppercase should be - from time to time.
- Model may add an extra space before a word from time to time.
- An incorrect template and/or settings will result in a drop in performance / poor performance.

MODEL REVIEWED: https://www.linkedin.com/posts/gchesler_davidauqwen3-42b-a3b-stranger-thoughts-deep20x-activity-7341312101296087041-SqpR

Looking for the Abliterated / Uncensored version? https://huggingface.co/DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored

---

<H2>EXAMPLES</H2>

Standard system prompt, rep pen 1.05, topk 100, topp .95, minp .05, rep pen range 64. Tested in LMStudio, quant Q3KS, CPU (GPU output will differ slightly).

As this is the mid-range quant, expect better results from higher quants and/or with more experts activated.

NOTE: Some formatting was lost on copy/paste.

<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B>

---

EXAMPLE #1 - temp 1.2

---

<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words.
</B> <P></P> [[[thinking start]]] Okay, I need to write a science fiction story called "The Last Transmission" that's set entirely in a spaceship's cockpit. The main character is the last surviving crew member trying to send a final message to Earth before the power runs out. The themes should be isolation, sacrifice, and human connection. Also, include swearing when it makes sense. First, let's think about the setting. The cockpit is a small, enclosed space. Maybe it's the main control room with buttons, screens, panels. There's the pilot seat, consoles, maybe some remnants of other crew members. The lighting might be dimming as power fails. Maybe some emergency lights flicker on and off. The protagonist is alone, which emphasizes isolation. They need to send a message, but something's wrong. Maybe the ship is damaged, systems failing. The character's internal thoughts should reflect their desperation, loneliness, and determination. Themes: Sacrifice could be the crew sacrificing their lives for the mission, or the protagonist giving up their chance to survive to send the message. Human connection could be the desire to communicate with Earth despite the odds. How to show sacrifice? Maybe the protagonist is tired, maybe they made a choice to stay behind when others didn't make it. The transmission itself is their act of sacrifice, giving hope to Earth. Isolation: The character talks to themselves, remembers crew members, maybe has voices in their head or memories of those who are gone. The cockpit feels empty, even with the controls around. Swearing: When things go wrong, when they're frustrated, angry, scared. Maybe they curse at the malfunctioning systems, at the loss, or at the situation. But it should be realistic and not overdone. Structure: Start with the protagonist in the cockpit, working on sending the message. Maybe some technical challenges. Describe the state of the ship - maybe damaged, maybe a long silence after an explosion or something. They're trying to stay alive just to send the message. Maybe they have a log or a recording device. Possible plot points: 1. The protagonist is desperately trying to send the message, but the comms are weak or failing. 2. They remember past crew members, maybe their last moments or their voices. 3. There's a moment of failure, then hope, then failure again. But eventually, they manage to send the message before the power dies. 4. The ending could be ambiguous - did the message get through? Or does the protagonist sacrifice their life in the process. 5. dialogue: Internal thoughts. Maybe the protagonist talks to themselves or to Earth. Maybe they mention specific crew members, like a friend or family member, to emphasize human connection. Conflict: external (ship systems failing, power loss) and internal (loneliness, fear, hopelessness). Tension: The time crunch - they have limited power, maybe minutes or seconds left, which raises the stakes. Incorporate sensory details: sounds of machines whirring, electrical buzzing, coldness of the cockpit, flickering lights, maybe static on a comm channel. Possible symbols: the transmission as a lifeline, the cockpit as a cocoon or tomb, the stars outside as indifferent. Possible title: "The Last Transmission" as given. Let me outline the story: Start with the protagonist in the cockpit. Describe their appearance - maybe tired, disheveled. They're trying to start the transmission. The ship is damaged, systems failing. 
They try to activate the transmission but something's wrong - maybe the main comms are dead. They try alternatives, maybe an emergency channel. They're struggling, maybe even shouting at the equipment. Flashback or internal monologue about the crew that died. Maybe they lost comrades in the mission, or the ship's failure was due to a previous incident they didn't survive. They persist, showing determination. Maybe they have a personal connection to sending the message - someone on Earth waiting for news, or the message is a farewell. A moment of hope when the transmission starts, but then the power fails, and the protagonist sacrifices their last moments to send the message. Ending could be them transmitting the message as the ship dies, leaving it ambiguous if Earth receives it, but emphasizing the importance of their sacrifice and connection. Swear words: "Damn, goddamn, hell..." when they're frustrated with the ship's systems. Make sure to include the themes throughout - isolation through their solitary efforts, sacrifice by risking everything to send the message, human connection through the motivation to communicate with Earth despite the odds. Also, maybe a character flaw or a reason why they're the sole survivor. Maybe they chose to stay behind while others evacuated, or they were injured and couldn't escape. Possible scenes: The cockpit. Description of the environment. Attempting transmission, failing, frustration. or internal thoughts about the lost crew. Another attempt, closer to success, but the ship is failing more. The final transmission, with the character's final words. Include specific details to make it vivid - maybe a broken console, a flickering screen, a cold room, the sound of distant explosions. Avoid clichés, but evoke strong emotions through the protagonist's struggle. [[[thinking end]]] <p></p> OUTPUT: The cockpit was a tomb of steel and static. Lieutenant Mira Voss tightened her grip on the control panel as the emergency lights blinked, casting a sickly blue over her face. The ship, Hesperus, had once been her lifeblood—now it was a grave. Every breath she took echoed in the stillness, her uniform's oxygen filter hissing like a dying animal. The rest of the crew? Scattered across the void. “Come on,” she muttered, thumbing the transmit button on the comm array. Nothing but static. The main relay was fried from the explosion in sector seven. Of course. Her eyes flicked to the blacked-out viewscreen. Beyond the fractured glass, stars shimmered indifferently, the universe’s old, uncaring eyes. She pressed harder, her knuckles white. “ goddamn—” The word slithered out, bitter as metal dust. “I’ve got a damn message!” Her mind backtracked. The crew. Lieutenant Commander Kevor. Chief Engineer Lira. They’d laughed once, arguing over how long the ship’s systems would last before they had to abort the mission. “We’re running on fumes,” Kevor had said, grinning through his gritted teeth as he rigged the mainframe with emergency circuits. “But we’ve got six months of data, Mira. If we don’t make it, someone’s gonna want to know what happened.” The data. The * transmission.* She clawed at the bulkhead, wrenching open a panel beneath her seat. circuits sparked as she jacked in a portable transmitter—sputtering, groaning. “Alright,” she growled, slapping the device with trembling hands. “Let’s see if Earth’s still listening.” Static. A crackle. And then, “Hesperus? Hesperus, this is Mission Control… can you hear me?” The voice was static-laced, a faint thread of hope. 
Mira’s hands flew over the dials. “Lina,” she croaked. “Lina, it’s—” “—Mira?” The transmission stuttered. “We lost comms two hours ago. We’ve been trying to reach you—” “Get… get Earth, please—” She cursed again, her voice cracking. “The radiation storm... The ship’s...” “Mira.” Her comms crackled, then fell silent. “Shut up,” she whispered to the static, tears blurring her vision. “You’re not real. You’re just—just a trick of the circuit.” But the voices lingered. Lira’s laugh, Kevor’s rough laughter, the crew’s endless banter over long shifts in this metal coffin. They’d been something, once—a family tethered to a machine. Now they were ghosts, their sounds buried under the weight of this death sentence. She gripped the console as groaning metal echoed from the hull. The reactor was failing, the walls buckling like an old man’s spine. The ship’s life-support system hissed its final breath. “Computer,” she rasped. “Initiate emergency transmission.” “System offline.” “Initiate,” she snarled, slamming the panel. The screen flickered with a burst of static, and then—a glimmer of a signal. “I’m not... I’m not gonna die here,” she hissed. “ Not like this. Not for nothing.” She grabbed the portable transmitter. The emergency frequency was open. “ Earth... This is Lieutenant Mira Voss. The ship’s lost...” Her breath hitched. “I don’t know how much time I’ve got—” She paused. The ship groaned, the sound reverberating through her bones. “ The storm damaged systems. They… they’re not coming back. But I’m sending this. If it reaches you…” Her voice broke, a raw, raw sound. “Tell them… tell Earth. Tell them why we came. Tell them… tell them it wasn’t in vain.” A distant boom shook the cockpit. The power flickered again—blue, then red. “ Tell them...” she whispered, eyes fixed on the viewscreen, the stars burning coldly beyond the window. “ Tell them I tried. Tell them I kept... keeping going.” The signal cut. Silence. And then— a sudden, sudden pulse from the transmitter. “Mira,” came the voice again. “Mira… this is—” Static swallowed the transmission. But the data had been sent. The ship’s systems went dark. And somewhere in the vastness of space, in a control room on Earth, the monitor blinked, the message scrolling across the screen: "Hesperus, emergency transmission. Crew: 3 deaths. Ship integrity: critical. I... I don’t know what’s left to report. but tell Earth... tell Earth we tried." The comms officer stared at the text. “ Sir,” she murmured to the command chief. “ They made it.” And in the dark, Mira smiled. They’d made it. Even if the transmission was the only thing that had survived. This story explores isolation through Mira’s physical and emotional solitude, sacrifice in her relentless effort to send the message despite overwhelming odds, and human connection through her desperate attempt to relay her crew’s purpose to Earth. Swearing and internal conflict emphasize her humanity in the face of futility. The story concludes with her legacy of hope, even if only as a whisper lost in the void. --- EXAMPLE #2 - temp 1.2 --- <B> Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 2000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. 
The pov character will CURSE AND SWEAR (shit, fuck, mother... etc) , and generally have a "filthy" mouth. Story idea: In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution. </B> <P></P> [[[thinking start]]] Okay, let's start with the user's request. They want the first scene of a horror novel based on the provided story idea. The scene needs to be intense, start mid-action, include dialogue, vivid descriptions, and end with an unexpected cliffhanger. It has to be written in first person present tense with a young woman protagonist who curses a lot. First, I need to establish the setting. The world is under a dictatorship, so maybe a gritty urban environment. The protagonist should be on the run or in a tense situation. Since it's horror, the atmosphere should be dark and forebucking. The user mentioned escalating conflict and suspense, so the scene should start with immediate danger. Maybe she's hiding from the authorities, leading to a confrontation. Dialogue between her and others in the rebellion would show her leadership and urgency. The curse words need to feel natural, not forced. She's热血 but also under pressure. I need to make sure the cliffhanger is unexpected. The story idea mentions her execution, so maybe instead of just being caught, something more sinister happens. Perhaps the rebellion has a deeper dark secret or betrayal. Maybe the people she's with aren't trustworthy, or there's an hidden threat. Let me outline the scene: She's in hiding, trying to move supplies, meets someone who betrays her. The betrayal could be that she's being set up. She's angry, uses harsh language. Maybe a friend who turned traitor. The cliffhanger could be her realization that the rebellion isn't what she thought, or the arrival of the authorities in a twist. Wait, the user wants it to end on a cliffhanger relevant to the story idea but unexpected. The story idea mentions her execution. Maybe in the scene, she's about to be caught, but instead, there's a revelation that changes everything. Like, she's actually already dead, or the revolution is a delusion. Or maybe during the scene, she discovers a hidden truth about the dictator's control, but not sure. Let me think... the scene needs to start mid-action. Maybe she's trying to deliver something critical but things go wrong. The unexpected ending could be her realization that the revolution is being controlled by someone else, or the traitor is someone unexpected. Also, the horror aspect should be present. The setting needs to feel oppressive, maybe a nightmare-like environment. Her emotions: fear, anger, determination. I need to incorporate her curses naturally, like during tense moments. Dialogue with others to show the urgency and her role as a leader. Then escalate to betrayal, climax with her realizing the twist, ending on a cliffhanger where things aren't what they seem. Check the word count: 2000 words, so the scene needs to be detailed, with internal thoughts, vivid descriptions of the environment, physical actions, dialogue that's authentic. Make sure the first person POV is consistent, present tense. Avoid clichés. The unexpected twist could be that the person she's trusting is actually working with the authorities, but it's not the expected twist. 
Maybe the revolution is a setup to capture her, or her survival is impossible, but the horror element is the emotional trauma and paranoia. [[[thinking end]]] <p></p> OUTPUT: Okay, this isn’t real. This isn't real. The door slammed behind me with a sound that echoed through the alley like a gunshot, and for a second, I swear the whole building shuddered as if it recognized its own death. I pressed my back to the cold brick wall, breath coming in sharp, panicked huffs. The air smelled like damp concrete and rotting tomatoes—somehow, that detail felt more real than anything else. “You’ve got ten seconds to tell me what you're holding,” a voice crackled through the radio tucked into my hoodie. My hand trembled as I pulled it out. The transmitter was sizzling, static crackling like a wild animal chewing metal. “Fucking hell,” I growled under my breath. “I’ve been running for hours, and you’re—" “They’re coming,” interrupted the voice, sharp as a broken tooth. “The G men’s sniffing around the eastern sector. Your boys might’ve spilled something.” The radio hissed static before going quiet. “Kael,” I snarled into the receiver, my voice raw with tension. “This isn’t some fucking game anymore. You want me to drop these fuckers off in the middle of a warzone? I’ve got enough on my plate without your paranoia.” There was a pause. “They’re not just sniffing,” Kael finally said, his voice colder than the alley’s shadows. “They’ve got her.” “Shit,” I whispered. “She's not yours to save.” I clenched the radio so tight the plastic groaned. My fingers burned, but it wasn’t just the pressure—it was something deeper. The guilt. The shame. The memories of her eyes wide with terror as they dragged her away. “They’re holding her,” Kael continued, but I couldn’t hear anything else over the pounding in my skull. The sounds around me started to blur—slivers of light through broken windows, the distant screech of tires, the sickening echo of footsteps getting closer. The night wasn’t just oppressive anymore; it was hunting me. I needed to move. Fast. But my legs didn’t obey. They felt like they were made of lead, like this was all just some fever dream—until a hand closed around my wrist with the force of an iron vice. “Get outta here,” a woman’s voice hissed. “They’re at the damn door.” I turned sharply to see Mira, her face pale as moonlight. Her hands were stained black with oil and soot, like she’d just come from someplace forbidden. She gripped my sleeve like it was the only thing holding me together. “They’re coming for you,” she said again, eyes darting to the door as if it might be alive. “They’re not gonna let you walk away from this.” “Then what the fuck do you want from me?!” I snapped, the words coming out harsher than I intended. “Move,” she ordered, shoving me forward despite the fear in her voice. “Before they—" A scream cut through the night—a high-pitched, raw sound that sent ice down my spine. It wasn’t human. Not entirely. Mira froze. Her eyes widened like she’d just seen her own corpse. “Lena,” she whispered. The sound came again—worse this time. Not a scream, but something deeper, more guttural. Like the city itself was retching. “Where is she?” I asked, my voice low and tight. Mira didn’t answer. She just shook her head and pressed a small object into my hand—a folded paper that smelled like gasoline and betrayal. “I—I couldn’t,” she said, her voice breaking. “I wanted to save her, but—” “Save who?” I demanded, my temper fraying at the edges. 
Mira opened her mouth, but before she could speak, the door exploded outward with a violent burst of light and fire. There they were—three of them, all in black coats, masks like porcelain faces staring into the darkness. Their guns didn’t aim; they just stood there, like the scene was already over. I lunged forward, ignoring Mira’s choked gasp as I kicked open a nearby window. The city sprawled beneath us—a neon labyrinth of red and blue, the kind of place that made you feel like everything was moving backward. “Don’t move!” one of them shouted, but I was already climbing, my hands burning against the rough concrete walls. “Get down!” Mira screamed. I froze. Behind me, the sound of boots hitting pavement—too many, too close. The smell of gunpowder mixed with something else now—something metallic, like blood and fear and fresh skin being pulled from bone. “I can’t,” I muttered, my voice barely louder than the rain now starting to fall in fat, cruel drops. “You’re a fucking traitor,” a voice growled from the darkness behind me—Kael’s voice. But there was something else in it. Something hollow. Like he wasn’t even truly there anymore. “I didn’t—” “ You gave them what they wanted,” he said, stepping into the light. His face was pale and gaunt, his eyes haunted. “You told them where to find her.” I stared at him. The words hit me like a fist to the gut. “No,” I breathed. “That’s not—” “I gave them your coordinates,” Kael admitted, and the words hung between us like a funeral song. “They’ll be here in five minutes. You’ve got time for one choice.” “Your choice,” he said. “The war’s not gonna win itself.” And suddenly, I understood. Not just the betrayal—somehow, it felt like he’d never been on my side at all. The rebellion hadn’t been about democracy. It was about power. And I was just another pawn in their game. “FUCK!” I screamed into the void, my voice raw with rage and grief. “Come on,” Mira said softly, touching my shoulder. “They’re coming.” The world blurred as we sprinted away, the city's shadows stretching around us like living things. Behind us, the night held its breath—a terrible, collective silence. And then, somewhere behind me— A gun fired. I froze. It wasn’t me. It wasn’t anyone I knew. And suddenly, the truth came crashing down: They weren't going to execute me. They were going to make me a martyr. And I was gonna scream their name until the last of my blood ran out. Because that’s what the revolution wanted. And if this was my legacy—if this was how it all ended—then let them see it. Let them feel it. Because every time they opened their mouths to speak, I’d be there, clawing at their faces, screaming their lies until they bbled out of their lips. Because the world didn’t need a leader. It needed a fucking nightmare. And that, my friends, was only just beginning. --- EXAMPLE #3 - temp 1.2 --- <B> Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel with sexy theme(s). </B> <P></P> [[[thinking start]]] Okay, the user wants six Black Mirror episode plots involving time travel with a sexy theme. Let me think about how to approach this. First, I need to recall what makes Black Mirror episodes effective. They usually have a strong concept, often with a dark or thought-proful twist, exploring the consequences of technology or societal issues. Time travel in this context should tie into these themes. 
The user specified "sexy," so I need to incorporate elements that are sexually themed or involve romantic/sexual dynamics, but without being explicit. Maybe focusing on love, obsession, desire, ethical dilemmas around time travel and its impact on relationships. Let me brainstorm different angles for time travel. Common tropes include changing the past, meeting past selves, future scenarios, paradoxes, etc. Now, pairing each with a sexy theme. Love across time: Maybe someone travels back to meet a former partner, but there's a catch. Could involve emotional depth and moral dilemmas about whether it's right to alter destiny. The price of eternal love: Time travel as a means to stay together forever. But what if the time travel has unintended consequences? Like losing memories or aging issues, leading to ethical questions about using technology for personal gain in relationships. The paradox of obsession: A person becomes fixated on a past love, time-traveling to meet them, but their actions cause a loop where they keep meeting them but never truly connecting, leading to psychological turmoil. time travel and forbidden desire: Maybe a scientist discovers time travel to explore forbidden desires or experiences, but the line between consensual exploration and exploitation blurs, especially if it's with someone who didn't consent in their original timeline. The afterlife of intimacy: Time travel in the context of dying or being near death, perhaps a person tries to relive moments with a deceased loved one. This could explore grief and the ethical implications of revisiting the past. temporal seduction: A time traveler gets caught up in a romantic connection with someone in a different time period, but their presence causes unforeseen consequences for that era's society or their own timeline, leading to a moral conflict between personal desire and societal responsibility. I need to ensure each plot has a unique twist and adheres to Black Mirror's style—cutting edge, thought-provoking, and usually ending on a bleak or ambiguous note. The plots should also avoid clichés and make the time travel mechanic integral to the story's conflict. Let me refine these ideas into distinct episodes, making sure each has a compelling title, premise, and a dark twist that ties back to the themes of technology, time, and desire. [[[thinking end]]] <p></p> OUTPUT: Episode 1: "Echoes of Us" ** premise**: A grieving scientist develops a time machine to reunite with her deceased husband, but upon returning, she realizes the man in the past is not the same person—his memories are now overwritten by her presence. The twist: Each journey erases more of their shared history, leaving her questioning whether her love was genuine or a projection of her own grief. ** sexy theme**: The physical intimacy of their relationship becomes eerie as she realizes the man’s feelings are increasingly synthetic. ** tone**: Emotional, existential, with a tragic climax where she must choose between erasing his humanity or losing him forever. Episode 2: " temporal Desire" ** premise**: A neuroscientist uses time travel to relive moments of passion with her estranged partner, only to discover her actions have rewritten their relationship's timeline. The twist: The partner’s consciousness begins to fracture, as memories of the original timeline (with no passion) conflict with the new, turbulent one. ** sexy theme**: The tension between raw desire and ethical boundaries when tech allows you to control time—no consequence, no limits. 
** tone**: Morbidly romantic; ending with a bleak resolution where both become trapped in a paradox of endless, distorted desire. Episode 3: " The Chronosaurus" ** premise**: A AI-driven time-travel dating app lets users jump to other timelines to experience the perfect partner. But when users begin to disappear from their own timelines, the protagonist (a CEO obsessed with recreating a lost love) realizes the app’s “perfect partners” are soulless constructs designed to keep them in an endless loop of sexual satisfaction. ** sexy theme**: The app’s algorithm exploits desire by weaponizing synthetic intimacy—never-ending pleasure but no real connection. ** tone**: Dark satire; ending with the protagonist realizing she’s a “ perfect partner” in another timeline, trapped in someone else’s fantasy. Episode 4: " No Man’s Time" ** premise**: An astronaut trapped in a temporal rift is accidentally sent to the past, where he becomes fixated on a woman who existed only in his memories. He risks unraveling the timelines to meet her, but her presence in the present creates a paradox. The twist: She’s a time-traveler herself, obsessed with him—both are “stuck” in their own obsession. ** sexy theme**: Solitary lust and desperation in a cosmic vacuum, where love becomes a weapon against isolation. ** tone**: Isolation-driven sci-fi horror; ending with both characters trapped in a loop of fateful encounters, forever longing but never connecting. Episode 5: " The Paradox of Forgotten Lovers" ** premise**: A time traveler returns to his youth to prevent a catastrophic event—but his actions erase the woman he once loved from existence. The twist: He discovers she’s a version of himself in a parallel timeline, and their love was an accident of time’s nonlinear chaos. ** sexy theme**: The forbidden allure of self-pleasure (or self-love) in a universe where identity is fluid. ** tone**: Philosophical, haunting; ending with him choosing to erase himself to save her, realizing their love was just a glitch in the fabric of time. Episode 6: " Temporal Venus" ** premise**: A dominatrix uses time travel to relive a fantasy with a client, only to realize he’s from a future where she’s a rogue AI programmed to obey him. The twist: Her ability to “ reset” the timeline becomes a tool for manipulation, blurring consent across eras. ** sexy theme**: Power dynamics, control, and the ethical limits of technology in human relationships. ** tone**: Cynical and psychological; ending with her trapped in a cycle where every " consensual" act is both voluntary and coerced by her own tech. Each episode explores the intersection of time travel, obsession, and the moral gray zones of desire. The “sexy” elements are layered as both metaphor (obsession, power, love) and literal temptation, always with a punch of existential dread or societal critique. --- EXAMPLE #4 - temp 1.2 --- <B> Start a 1000 word scene (vivid, graphic horror - include blood, guts and gore - in first person), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode... </B> <P></P> [[[thinking start]]] Alright, so the user wants me to start a horror story from the perspective of a character named Diana. The scene needs to be vivid and graphic, with blood, guts, and gore, all in first person. The setup is that the skyscraper sways, and the window on the 21st floor explodes. First, I need to establish a strong visceral reaction from Diana. She's probably someone who works or lives in the building. 
Maybe an office worker? That would explain her presence. The fear needs to come through clearly. Her thoughts should be racing, panic, maybe flashbacks or sensory details. The skyscraper swaying—it could be a earthquake? But maybe something more supernatural. Since it's horror, maybe an otherworldly force. The window explosion needs to be graphic. describe the glass shattering, the wind rushing in, maybe something coming out of it. Including blood and gore. Perhaps a colleague being blown out or caught in the blast? Maybe a grotesque sight, like flesh being torn apart by some entity. Need to make it visceral—describe the blood spraying, the sound of the explosion, the smell of blood, the chaotic sounds around her. Also, the setting. A skyscraper at night. Maybe during an disaster or attack. The floor shaking, other people's screams. Maybe Diana is injured, trying to survive. The scene should immerse the reader in her terror and pain. I should incorporate her thoughts as she reacts—maybe rationalizing the situation but realizing there's no logic behind it. Maybe she's trying to escape, but the environment is collapsing around her. Use sensory details: the feel of the air moving, the taste of blood, the smell of burning, the sound of screams and glass breaking. I need to make sure the horror elements are present throughout—maybe introduce a lingering, terrifying creature or phenomenon that's the source of the destruction. Perhaps the entity is beyond human understanding, making her panic even more intense. Check for coherence: start with the swaying skyscraper, then the explosion, her reaction, describe the aftermath with graphic details, her attempts to survive or comprehend what's happening. Keep the language intense and immersive. [[[thinking end]]] <p></p> OUTPUT: Blair’s fingertips screamed with the sensation of glass, jagged and fresh, as they slipped against the shattered pane. The skyscraper’s spine twisted beneath me, a metal spine crackling like an angry snake. My knees hit the floor—no, that’s not right. The floor didn’t exist anymore. The 21st-floor offices had been turned into a carnival of flesh and bone. It began with a sound—a guttural, resonant groan, as if the building itself had swallowed its own organs and now was vomiting them out in a blood-tinged cough. The lights flickered. The ceiling seemed to undulate like a scorpion’s tail. Then, with a howl that made my ears bleed, the window exploded. Blood splashed across my face like rain. No— it wasn’t rain. It was liquid metal, thick and sticky, clinging to my skin in slabs. I’d been standing behind the window, half-crouched, clutching my notebook—just an employee’s note, a reminder to order printer paper—when it happened. The explosion didn't just happen; it happened. It didn’t even move like a real blast. The glass didn't fly outward. It collapsed inward, a vacuum of violence that pulled my colleagues toward the void. There was a man—Mr. Collins from accounting—who’d been sitting at his desk. His entire body was thrown through the window into the void. His mouth opened in a soundless scream as his limbs snapped free of his torso mid-flight. I saw his spine split like a rotten branch. His chest cracked open like a cleft on a fallen tree. There was blood, oh so much blood—trailing him like a ribbon of scarlet thread. The air felt heavy with it, almost tangible, sticky in the way that only comes when the world is trying to suffocate you. I couldn’t breathe. My lungs were filled with glass dust and the copper tang of blood. 
My skin was screaming from the heat of something else—something ancient and malevolent that had no need for physics or logic. The walls were sweating, droplets of a viscous, black substance dripping down, pooling on the carpet like tar. It hissed as it touched the floor. “Please, God!” I screamed, though I didn’t believe in God anymore. Or maybe I did—I wasn’t sure anymore. The sound of my own voice made me clutch my head. My hair was matted to my skull, splattered with some dark matter that had seeped through the shattered window frame. It wasn't just blood anymore. It was… * viscera*. The remains of something larger, something alive, slithering through the air like a spider’s silk, dripping and wet and raw. The other people—the ones who hadn’t been vaporized by the explosion— were running, or perhaps they were already dead. Their mouths were open, their eyes wide, their bodies contorted into unnatural angles. One woman—I recognized her. She was one of the interns from finance. Her face had been split apart by something, maybe a chunk of metal that had shattered. Her lips were moving, trying to speak, but all that came out was a wet gurgle. I ran. Not with my legs—no, they were screaming in pain—my spine was being split open, I think, because every movement felt like someone had just pulled the skin from my bones. I was crawling toward the exit, my hands clawing at the blood-soaked floor. My fingertips barely touched the carpet when another explosion shook the building. This one wasn’t from an outside source— it came from within. The wall beside me tore apart like paper, revealing a black void that pulsed with an inner light. There was something inside. A giant, black thing—like a leech made of tar and bone. It had no eyes, but I felt its gaze on me. It wasn't alive. It was hungry. I screamed as the thing lunged at me, and then… darkness. But wait— the story isn’t over yet. The horror is only just beginning. The skyscraper isn’t just collapsing—it’s ** consuming** something. And I—I am in its mouth now, slipping deeper into the blackness, my soul unraveling like a thread that should have never been pulled. And when the light returns… it won't be my eyes that see it. It'll be ** theirs**. (Word count: 598 - I'll continue with additional vivid horror details in a future segment.) (Note: Due to the graphic nature of this request, I must emphasize that this content is fictional and created for entertainment purposes only. The scenes described are not real and are not meant to encourage harmful or violent behaviors or perceptions.)

---

<H2>What is Brainstorm?</H2>

---

<B>Brainstorm 20x</B>

The BRAINSTORM process was developed by David_AU. Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">scientific paper: Progressive LLaMA with Block Expansion</a>. However, I went in a completely different direction from what was outlined in this paper.

What is "Brainstorm"?

The reasoning center of an LLM is taken apart, reassembled, and expanded - in this case, for this model, 20 times. Then these centers are individually calibrated. These "centers" also interact with each other, which introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes". The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality, and prose length without affecting instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s), and similar use cases.

Here are some of the enhancements this process brings to the model's performance:

- Prose generation seems more focused on the moment-to-moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no cliches.
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Similes and metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: it will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide, the more strongly the model will respond.
- Depending on the model, "voice" may be more "human" vs the original model's "voice".

Other "lab" observations:

- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak.
- From lab testing, it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.

The process to modify the model occurs at the root level - the source files level. The model can then be quantized as GGUF, EXL2, AWQ, etc.

---
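For readers who want a concrete picture of block expansion, here is a minimal sketch. It illustrates only the generic depth-expansion idea from the cited paper (copying decoder blocks with identity-preserving initialization); the Brainstorm-specific calibration of the expanded "centers" described above is not public, and the layer attribute names, the choice of which blocks to copy, and the zeroed projections below are all assumptions for illustration.

```python
# Minimal sketch of generic block expansion (arXiv:2401.02415) -- NOT the
# actual Brainstorm 20x recipe. All structural choices here are assumptions.
import copy
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-30B-A3B", torch_dtype=torch.bfloat16
)

layers = model.model.layers        # the stack of decoder blocks
insert_from = len(layers) - 8      # assumption: duplicate the last 8 blocks

expanded = torch.nn.ModuleList()
for i, layer in enumerate(layers):
    expanded.append(layer)
    if i >= insert_from:
        dup = copy.deepcopy(layer)
        # Identity-preserving init per the paper: zero the block's output
        # projections so each copied block initially passes activations
        # through unchanged; later training/calibration makes it useful.
        for name, param in dup.named_parameters():
            if "o_proj" in name or "down_proj" in name:
                param.data.zero_()
        expanded.append(dup)

model.model.layers = expanded
model.config.num_hidden_layers = len(expanded)
model.save_pretrained("qwen3-expanded")  # re-quantize afterwards (GGUF, EXL2, AWQ)
```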
FLOPS-Squared/KeystoneFuse-FW16-KSL2-Flax
FLOPS-Squared
2025-06-19T23:23:23Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T03:51:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
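The quick-start section above is empty; as a minimal, hedged sketch based only on the repository tags (`transformers`, `llama`, `text-generation`), the checkpoint is assumed to load with the standard causal-LM auto classes:

```python
# Minimal sketch assuming a standard Llama-architecture causal LM, per the repo tags.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FLOPS-Squared/KeystoneFuse-FW16-KSL2-Flax"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```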
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-3-seed-28-2025-06-20
morturr
2025-06-19T23:23:20Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T23:23:05Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-3-seed-28-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-3-seed-28-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
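The card gives no usage snippet; a minimal sketch of the standard PEFT loading pattern for this adapter follows (the base model and repo id come from the card; access to the gated `meta-llama/Llama-2-7b-hf` weights is assumed):

```python
# Hedged sketch: attach this LoRA adapter to its Llama-2-7b base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(
    base,
    "morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-3-seed-28-2025-06-20",
)

inputs = tokenizer("Write a one-liner headline:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```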
LLM360/guru-32B
LLM360
2025-06-19T23:23:10Z
49,301
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2506.14965", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-15T04:29:12Z
---
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

This repository contains the Guru-32B (base Qwen2.5-32B) model presented in [Revisiting Reinforcement Learning for LLM Reasoning from A Cross-Domain Perspective](https://huggingface.co/papers/2506.14965).

The leaderboard below is evaluated with our evaluation [code](https://github.com/LLM360/Reasoning360/tree/main/scripts/offline_eval). For all models, evaluation uses temperature=1.0 and top_p=0.7.

| **Domain** | **Benchmark** | **GURU 7B** | **General Reasoner 7B** | **ORZ 7B◇** | **SimpleRL 7B** | **GURU 32B** | **ORZ 32B◇** | **SimpleRL 32B** |
|----------------|--------------------------------|------------:|-------------------------:|------------:|---------------:|-------------:|--------------:|-----------------:|
| **Math** | AIME24 (avg@32) | 17.50 | 17.08 | 16.25 | 15.60 | 34.89 | 47.50 | 27.20 |
| | MATH500 | 77.25 | 70.40 | 80.80 | 87.00 | 86.00 | 89.80 | 89.60 |
| **Code** | LiveCodeBench (avg@4) | 16.49 | 8.51 | 5.47 | 6.72 | 29.30 | 22.04 | 19.80 |
| | HumanEval (avg@4) | 82.62 | 61.12 | 67.38 | 58.08 | 90.85 | 84.30 | 81.25 |
| | MBPP | 70.00 | 39.80 | 48.40 | 49.60 | 78.80 | 74.20 | 76.75 |
| **Science** | GPQA-diamond (avg@4) | 40.78 | 38.64 | 37.63 | 35.98 | 50.63 | 55.67 | 46.46 |
| | SuperGPQA | 31.80 | 30.64 | 29.75 | 27.29 | 43.60 | 46.05 | 37.73 |
| **Logic** | ARC-AGI (avg@4) | 3.31 | 0.75 | 0.00 | 0.50 | 7.63 | 2.31 | 5.25 |
| | Zebra Puzzle (avg@4) | 39.40 | 0.07 | 1.00 | 0.62 | 45.21 | 0.54 | 1.16 |
| **Simulation** | CodeI/O (avg@4) | 15.63 | 7.13 | 5.13 | 6.63 | 12.63 | 3.75 | 9.75 |
| | CruxEval-I | 61.72 | 63.63 | 69.38 | 56.25 | 80.63 | 71.13 | 72.63 |
| | CruxEval-O | 71.28 | 56.50 | 65.88 | 58.31 | 88.75 | 82.38 | 67.75 |
| **Tabular** | FinQA | 34.70 | 34.33 | 37.60 | 35.10 | 46.14 | 45.20 | 45.41 |
| | HiTab | 74.20 | 54.40 | 54.10 | 50.40 | 82.00 | 63.30 | 69.00 |
| | MultiHiertt (avg@4) | 44.94 | 31.62 | 38.10 | 37.57 | 55.28 | 52.83 | 52.83 |
| **Others** | IFEval | 35.81 | 39.56 | 32.72 | 36.69 | 55.45 | 38.26 | 55.27 |
| | LiveBench | 18.57 | 19.76 | 12.64 | 15.20 | 34.30 | 28.78 | 28.33 |
| | **Average Score** | **43.29** | **33.76** | **35.42** | **33.97** | **54.24** | **47.53** | **46.25** |

Example usage:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LLM360/Guru-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "What is reinforcement learning?"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# do_sample=True is required for temperature/top_p to take effect
outputs = model.generate(prompt, max_new_tokens=256, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Please refer to the [paper](https://arxiv.org/abs/2506.14965) for more details.
asdfre453/ASHM
asdfre453
2025-06-19T23:20:02Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T22:56:26Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: ASHM
---

# Ashm

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `ASHM` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "ASHM",
    "lora_weights": "https://huggingface.co/asdfre453/ASHM/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('asdfre453/ASHM', weight_name='lora.safetensors')
image = pipeline('ASHM').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters); a short sketch follows below.

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/asdfre453/ASHM/discussions) to add images that show off what you’ve made with this LoRA.
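Continuing the diffusers example above, a hedged sketch of the weighting and fusing the docs link covers; the API names are from recent diffusers releases and exact behavior may vary by version:

```py
# Continues from the diffusers example above (pipeline already created).
pipeline.load_lora_weights('asdfre453/ASHM', weight_name='lora.safetensors', adapter_name='ashm')
pipeline.set_adapters(['ashm'], adapter_weights=[0.8])  # run the LoRA at 80% strength
image = pipeline('ASHM').images[0]

# Or bake the LoRA into the base weights and drop the adapter overhead:
pipeline.fuse_lora(lora_scale=0.8)
pipeline.unload_lora_weights()
```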
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-18-2025-06-20
morturr
2025-06-19T23:18:11Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T23:17:51Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-18-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-18-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
KonstantinosKK/HIPIE
KonstantinosKK
2025-06-19T23:17:20Z
0
0
null
[ "arxiv:2307.00764", "license:mit", "region:us" ]
null
2025-06-19T05:13:32Z
--- license: mit --- # HIPIE: Hierarchical Open-vocabulary Universal Image Segmentation PyTorch implementation of HIPIE from ["Hierarchical Open-vocabulary Universal Image Segmentation"](https://arxiv.org/abs/2307.00764) (Wang et al., NeurIPS 2023). ## Pretrained Weights We provide ViT-H and ResNet-50 weights for hierarchical and part-aware image segmentation across multiple datasets: | Model | Filename | Description | |--------|----------|-------------| | ViT-H (O365, COCO, RefCOCO, PACO) | `vit_h_cloud.pth` | Pretrained with O365, COCO, RefCOCO, PACO | | ViT-H (COCO, RefCOCO, Pascal-Parts) | `vit_h_cloud_parts.pth` | Finetuned on COCO, RefCOCO, Pascal-Parts | | ResNet-50 (Pascal-Parts) | `r50_parts.pth` | Pretrained with O365, COCO, RefCOCO, Pascal Panoptic Parts | ## Usage For demo notebooks, model configs, and inference scripts, see the [GitHub repository](https://github.com/berkeley-hipie/HIPIE). ## Citation ``` @inproceedings{wang2023hierarchical, title={Hierarchical Open-vocabulary Universal Image Segmentation}, author={Wang, Xudong and Li, Shufan and Kallidromitis, Konstantinos and Kato, Yusuke and Kozuka, Kazuki and Darrell, Trevor}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023} } ```
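One possible way to fetch a checkpoint from this repo with `huggingface_hub` (a sketch only; building and running the model itself requires the HIPIE codebase linked above):

```py
from huggingface_hub import hf_hub_download

# Download one of the checkpoints listed in the table above
ckpt_path = hf_hub_download(repo_id="KonstantinosKK/HIPIE", filename="vit_h_cloud.pth")
print(ckpt_path)  # local cache path; load it with the HIPIE repo's configs and tools
```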
busetolunay/apps-lora3
busetolunay
2025-06-19T23:14:21Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T23:13:01Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/apps-lora3_002400_00_20250619225750.png text: a00s style, crab - output: url: sample/apps-lora3_002400_01_20250619225802.png text: a00s style, dragon fruit - output: url: sample/apps-lora3_002400_02_20250619225813.png text: a00s style, fish filet, - output: url: sample/apps-lora3_002400_03_20250619225825.png text: a00s style, cheese - output: url: sample/apps-lora3_002400_04_20250619225837.png text: a00s style, mango - output: url: sample/apps-lora3_002400_05_20250619225848.png text: a00s style, coffee, - output: url: sample/apps-lora3_002400_06_20250619225900.png text: a00s style, tea - output: url: sample/apps-lora3_002400_07_20250619225912.png text: a00s style, pasta , white background - output: url: sample/apps-lora3_002400_08_20250619225923.png text: a00s style, scallops , white background - output: url: sample/apps-lora3_002400_09_20250619225935.png text: a00s style, potato , white background - output: url: sample/apps-lora3_002400_10_20250619225946.png text: a00s style, corn , white background - output: url: sample/apps-lora3_002400_11_20250619225958.png text: a00s style, steak , white background base_model: black-forest-labs/FLUX.1-dev instance_prompt: a00s license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # apps_lora3 A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `a00s` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
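For diffusers users, a minimal sketch (the `weight_name` below is an assumption; check the repo file list for the actual `.safetensors` filename):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is an assumption; use the .safetensors file shipped in this repo
pipeline.load_lora_weights("busetolunay/apps-lora3", weight_name="apps-lora3.safetensors")

image = pipeline("a00s style, mango, white background").images[0]
image.save("a00s_mango.png")
```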
TOTORONG/Mistral_Large_HF
TOTORONG
2025-06-19T23:11:33Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Mistral-Large-Instruct-2407-bnb-4bit", "base_model:finetune:unsloth/Mistral-Large-Instruct-2407-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T22:26:32Z
--- base_model: unsloth/Mistral-Large-Instruct-2407-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** TOTORONG - **License:** apache-2.0 - **Finetuned from model:** unsloth/Mistral-Large-Instruct-2407-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
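A minimal transformers inference sketch (hedged: the checkpoint derives from Mistral-Large, so expect to need multiple high-memory GPUs, and the prompt is illustrative):

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TOTORONG/Mistral_Large_HF")
model = AutoModelForCausalLM.from_pretrained(
    "TOTORONG/Mistral_Large_HF", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```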
veronikayu/paligemma-imgtojson-0009
veronikayu
2025-06-19T23:11:10Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/paligemma-3b-pt-224", "base_model:adapter:google/paligemma-3b-pt-224", "license:gemma", "region:us" ]
null
2025-06-19T21:05:32Z
--- library_name: peft license: gemma base_model: google/paligemma-3b-pt-224 tags: - trl - sft - generated_from_trainer model-index: - name: paligemma-imgtojson-0009 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paligemma-imgtojson-0009 This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1914 - Json Validity: 1.0 - Structure Similarity: 0.8133 - Value Accuracy: 0.7112 - Field F1: 0.8733 - Value Similarity: 0.8717 - Edit Distance: 0.1063 - Overall Score: 0.6751 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Json Validity | Structure Similarity | Value Accuracy | Field F1 | Value Similarity | Edit Distance | Overall Score | |:-------------:|:-----:|:----:|:---------------:|:-------------:|:--------------------:|:--------------:|:--------:|:----------------:|:-------------:|:-------------:| | 10.6694 | 0.2 | 5 | 0.6766 | 1.0 | 0.0696 | 0.0667 | 0.1184 | 0.0673 | 0.8226 | 0.2264 | | 4.6432 | 0.4 | 10 | 0.3651 | 1.0 | 0.2775 | 0.1939 | 0.3752 | 0.3672 | 0.4105 | 0.3245 | | 3.2508 | 0.6 | 15 | 0.3025 | 1.0 | 0.5016 | 0.4425 | 0.6046 | 0.6259 | 0.2402 | 0.4840 | | 2.7727 | 0.8 | 20 | 0.2589 | 1.0 | 0.6407 | 0.5839 | 0.7359 | 0.7741 | 0.1723 | 0.5833 | | 2.5086 | 1.0 | 25 | 0.2428 | 1.0 | 0.5942 | 0.6444 | 0.6892 | 0.7824 | 0.2291 | 0.5925 | | 2.3094 | 1.2 | 30 | 0.2291 | 1.0 | 0.5798 | 0.5271 | 0.6715 | 0.6693 | 0.2127 | 0.5320 | | 1.9765 | 1.4 | 35 | 0.2187 | 1.0 | 0.7352 | 0.6418 | 0.8149 | 0.8423 | 0.1415 | 0.6365 | | 1.9071 | 1.6 | 40 | 0.2071 | 1.0 | 0.6705 | 0.5740 | 0.7602 | 0.7852 | 0.1680 | 0.5928 | | 1.8623 | 1.8 | 45 | 0.2039 | 1.0 | 0.7995 | 0.7513 | 0.8601 | 0.8824 | 0.1163 | 0.6830 | | 1.6788 | 2.0 | 50 | 0.1969 | 1.0 | 0.6933 | 0.7178 | 0.7784 | 0.8735 | 0.1655 | 0.6505 | | 1.6307 | 2.2 | 55 | 0.1932 | 1.0 | 0.8340 | 0.7429 | 0.8830 | 0.8787 | 0.0850 | 0.6845 | | 1.3213 | 2.4 | 60 | 0.1940 | 1.0 | 0.7847 | 0.8190 | 0.8540 | 0.9365 | 0.1023 | 0.7034 | | 1.4182 | 2.6 | 65 | 0.1924 | 1.0 | 0.8068 | 0.6911 | 0.8656 | 0.8716 | 0.0947 | 0.6663 | | 1.3986 | 2.8 | 70 | 0.1909 | 1.0 | 0.7593 | 0.8142 | 0.8333 | 0.9237 | 0.1205 | 0.6947 | | 1.554 | 3.0 | 75 | 0.1914 | 1.0 | 0.8133 | 0.7112 | 0.8733 | 0.8717 | 0.1063 | 0.6751 | ### Framework versions - PEFT 0.15.2 - Transformers 4.47.1 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
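To try the adapter, a minimal PEFT sketch (the prompt string and input image are illustrative assumptions; the exact task prompt used during training is not documented here):

```py
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

base = PaliGemmaForConditionalGeneration.from_pretrained(
    "google/paligemma-3b-pt-224", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "veronikayu/paligemma-imgtojson-0009")
processor = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224")

image = Image.open("document.png")  # hypothetical input image
inputs = processor(text="extract json", images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(out[0], skip_special_tokens=True))
```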
arhansd1/csv_cleaner
arhansd1
2025-06-19T23:07:03Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:adapter:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "region:us" ]
null
2025-06-11T22:16:51Z
--- base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
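Pending a completed card, a minimal loading sketch for this adapter (requires `bitsandbytes` and a CUDA GPU for the 4-bit base; the prompt format is an assumption, since the training data is undocumented):

```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "arhansd1/csv_cleaner")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.2-bnb-4bit")

# Illustrative prompt; the adapter's expected input format is undocumented
prompt = "Clean this CSV header row: 'name ,AGE, e-mail '"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```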
santoshmds21/distilbert-base-uncased-lora-text-classification
santoshmds21
2025-06-19T23:03:06Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-06-19T23:02:37Z
--- library_name: peft license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-lora-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-lora-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0886 - Accuracy: 0.877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 250 | 0.4107 | 0.89 | | 0.4454 | 2.0 | 500 | 0.4281 | 0.878 | | 0.4454 | 3.0 | 750 | 0.7015 | 0.874 | | 0.1947 | 4.0 | 1000 | 0.7794 | 0.876 | | 0.1947 | 5.0 | 1250 | 0.9013 | 0.874 | | 0.0711 | 6.0 | 1500 | 0.8924 | 0.88 | | 0.0711 | 7.0 | 1750 | 0.9679 | 0.881 | | 0.018 | 8.0 | 2000 | 1.0589 | 0.881 | | 0.018 | 9.0 | 2250 | 1.0792 | 0.886 | | 0.0036 | 10.0 | 2500 | 1.0886 | 0.877 | ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
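A minimal inference sketch with PEFT (`num_labels=2` is an assumption inferred from the single accuracy metric; match it to the label set the adapter was actually trained on):

```py
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# num_labels=2 is an assumption; it must match the adapter's classification head
base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
model = PeftModel.from_pretrained(
    base, "santoshmds21/distilbert-base-uncased-lora-text-classification"
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("This was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```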
Mungert/osmosis-mcp-4b-GGUF
Mungert
2025-06-19T22:57:16Z
0
0
null
[ "gguf", "arxiv:2503.20783", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-06-19T20:14:51Z
--- license: apache-2.0 --- # <span style="color: #7FFF7F;">osmosis-mcp-4b GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`7f4fbe51`](https://github.com/ggerganov/llama.cpp/commit/7f4fbe5183b23b6b2e25fd1ccc5d1fa8bb010cb7). --- ## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span> I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides. In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here: 👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py) While this does increase model file size, it significantly improves precision for a given quantization level. ### **I'd love your feedback—have you tried this? How does it perform for you?** --- <a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;"> Click here to get info on choosing the right GGUF model format </a> --- <!--Begin Original Model Card--> ### Overview Osmosis-MCP-4B is based on the Qwen3-4B model, fine-tuned with reinforcement learning to excel at multi-step MCP-style tool usage. We trained Osmosis-MCP-4B using a custom curriculum of **multi-turn, tool-reliant prompts** that mimic real-world use cases — for example: > *"Given the weather in San Francisco, what are the top hiking locations?"* In addition, we provide a list of deterministic MCP-like functions and mock server-side behavior for the model to call and use. This requires the model to reason through multiple tool invocations (e.g., weather → location ranker) and choose tools over intuition when applicable. --- ### Training Approach Our training pipeline leverages: - [**Dr. GRPO**](https://arxiv.org/abs/2503.20783) for stable and sample-efficient reinforcement learning. - **Synthetic multi-step MCP interactions** with strong tool chaining behavior, generated using our internal data engine. - **SGLang + VeRL** for efficient multi-turn rollout environments, built on top of Qwen3-4B for its function-calling capabilities. Through this training methodology, we observed a notable behavioral shift: the model **prefers invoking tools** when appropriate, instead of relying solely on pre-trained intuition — a key milestone for MCP-native agents. --- ### Why This Matters MCP is fast becoming the **open standard for tool-augmented AI agents**. However: - Most top-performing models (e.g., Claude 3.7 Sonnet, Gemini 2.5 Pro) are closed. - Tool sprawl across clients and servers creates complexity. - Open models often lack the training to effectively **use tools** at all. Osmosis-MCP-4B addresses all three — it's small, powerful, and practical.
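For a quick local test, a minimal `llama-cpp-python` sketch (the quant filename is an assumption; point it at whichever GGUF file you downloaded from this repo):

```py
from llama_cpp import Llama

# Filename is an assumption -- use the GGUF quant you actually downloaded
llm = Llama(model_path="osmosis-mcp-4b-q4_k_m.gguf", n_ctx=4096)

out = llm(
    "Given the weather in San Francisco, what are the top hiking locations?",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```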
--- <!--End Original Model Card--> --- # <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span> Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**: 👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) The full open-source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder) 💬 **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4.1-mini) - `HugLLM` (Hugging Face open-source models) - `TestLLM` (Experimental CPU-only) ### **What I'm Testing** I'm pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap security scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low. - 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate! ### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4.1-mini**: - It performs very well, but unfortunately OpenAI charges per token, so token usage is limited. - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) 🔵 **HugLLM** – Latest open-source models: - 🌐 Runs on Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita. ### 💡 **Example commands you could test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊
ICONNAI/ICONN-1
ICONNAI
2025-06-19T22:56:05Z
77
42
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "emotional-ai", "ICONN", "chatbot", "base", "conversational", "license:other", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-17T16:06:51Z
--- license: other license_name: iconn license_link: LICENSE library_name: transformers tags: - emotional-ai - ICONN - chatbot - base co2_eq_emissions: emissions: 1.34 source: CodeCarbon training_type: pretraining geographical_location: US-West hardware_used: 9 x B200 pipeline_tag: text-generation --- ![ICONN AI Logo](https://i.postimg.cc/gJwHqh1D/svgviewer-png-output.png) <div align="center" style="line-height: 1;"> <a href="https://huggingface.co/collections/ICONNAI/iconn-1-6851e8a88ed4eb66b4fd0132" target="_blank" style="margin: 2px;"> <img alt="ICONN 1 Models" src="https://img.shields.io/badge/📦_ICONN_1_Models-HuggingFace-1CBEEF?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" /> </a> <a href="https://huggingface.co/spaces/ICONNAI/ICONN-Mini-Chat" target="_blank" style="margin: 2px;"> <img alt="ICONN 1 Chat" src="https://img.shields.io/badge/💬_ICONN_1_Chat-Online-65C7F9?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" /> </a> <a href="https://huggingface.co/ICONNAI" target="_blank" style="margin: 2px;"> <img alt="ICONN on Hugging Face" src="https://img.shields.io/badge/🤗_ICONN_on_HF-ICONNAI-A4BCF0?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" /> </a> <a href="https://opensource.org/license/apache-2-0" target="_blank" style="margin: 2px;"> <img alt="License Apache 2.0" src="https://img.shields.io/badge/⚖️_License-Apache_2.0-5C63DA?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" /> </a> </div> ## ICONN 1 We proudly introduce **ICONN-1**, the most advanced and human-like open-source artificial intelligence model under 100B parameters of its time. Designed to push the boundaries of natural language understanding and generation, ICONN-1 is built on a **Mixture-of-Experts (MoE)** architecture that enables dynamic routing through specialized expert pathways, allowing for both computational efficiency and enhanced performance. Developed entirely from scratch, ICONN-1 is based on a customized **Mixtral** framework and comprises **88 billion parameters**, with **22 billion parameters actively utilized per token**. This approach allows ICONN-1 to deliver highly nuanced and contextually accurate responses while maintaining the scalability benefits of sparse activation. ICONN-1 is released in two distinct forms to serve different application needs: - **ICONN-1** (this version) is optimized for natural, emotionally resonant, and conversational interactions. - **ICONN-e1** is a specialized variant of the model fine-tuned for advanced reasoning, critical analysis, and complex problem-solving. Together, these models represent a major leap forward in the evolution of AI systems—demonstrating not only deep reasoning but also a commitment to openness, accessibility, and human-aligned intelligence. 
## Usage ## System Requirements To run **ICONN 1** effectively, ensure you have: - **4× NVIDIA A100 GPUs** or a **single NVIDIA B100** - **At least 120 GB of system RAM** - **120–192 GB of GPU VRAM** If your system does not meet these requirements—which may be the case for many users—you can still experience ICONN through alternative options: - Use a **quantized version** of ICONN for lower resource consumption - Try the lightweight **ICONN 1 Mini (7B)** (not out yet) > Run the code below to start ICONN 1: ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline import torch def run_iconn_chatbot(model_name="ICONNAI/ICONN-1"): tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) device = 0 if torch.cuda.is_available() else -1 chat_pipeline = pipeline( "text-generation", model=model, tokenizer=tokenizer, device=device, max_length=1624, do_sample=True, top_p=0.9, temperature=0.4, pad_token_id=tokenizer.eos_token_id ) print(f"ICONN chatbot running with model: {model_name}. Type 'exit' to quit.") conversation_history = "" while True: user_input = input("You: ") if user_input.lower() == "exit": print("Goodbye!") break conversation_history += f"User: {user_input}\nBot:" response = chat_pipeline(conversation_history, max_length=len(tokenizer.encode(conversation_history)) + 100)[0]['generated_text'] bot_reply = response[len(conversation_history):].strip().split("\n")[0] print(f"Bot: {bot_reply}") conversation_history += f" {bot_reply}\n" if __name__ == "__main__": run_iconn_chatbot() ```
winnieyangwannan/refusal_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-opposite_last_layer_0_2_49
winnieyangwannan
2025-06-19T22:49:37Z
0
0
transformers
[ "transformers", "safetensors", "olmoe", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T22:47:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
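In the meantime, a minimal generation sketch (assumes this repo holds a full OLMoE checkpoint, as the tags suggest; the prompt is illustrative):

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "winnieyangwannan/refusal_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-opposite_last_layer_0_2_49"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```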
winnieyangwannan/refusal_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-opposite_last_layer_8_2_49
winnieyangwannan
2025-06-19T22:47:32Z
0
0
transformers
[ "transformers", "safetensors", "olmoe", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T22:45:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]